Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141264
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-20904 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-tzLpaNzTNqqV/agent.2049
SSH_AGENT_PID=2051
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/private_key_6775323941563531320.key (/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/private_key_6775323941563531320.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/64/141264/1 # timeout=30
 > git rev-parse 473f78ecac5fb75e5968b31a5bab95eaba72c803^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision 473f78ecac5fb75e5968b31a5bab95eaba72c803 (refs/changes/64/141264/1)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=30
Commit message: "Add Fix fail handling in ACM runtime in CSIT"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
provisioning config files...
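For reference, the fetch/checkout sequence above reduces to pulling a single Gerrit change ref and checking out its commit. A minimal local equivalent, assuming anonymous read access to the mirror (the change ref and SHA are the ones reported in this log):

  git init docker && cd docker
  # fetch only the patch set under review (change 141264, patch set 1)
  git fetch git://cloud.onap.org/mirror/policy/docker.git refs/changes/64/141264/1
  # detach onto the fetched commit
  git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803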
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins10686153200799223814.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-qCrN
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-qCrN/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-qCrN/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.6.2 aspy.yaml==1.3.0 attrs==25.3.0 autopage==0.5.2 beautifulsoup4==4.13.4 boto3==1.38.36 botocore==1.38.36 bs4==0.0.2 cachetools==5.5.2 certifi==2025.4.26 cffi==1.17.1 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.4.2 click==8.2.1 cliff==4.10.0 cmd2==2.6.1 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.2.1 defusedxml==0.7.1 Deprecated==1.2.18 distlib==0.3.9 dnspython==2.7.0 docker==7.1.0 dogpile.cache==1.4.0 durationpy==0.10 email_validator==2.2.0 filelock==3.18.0 future==1.0.0 gitdb==4.0.12 GitPython==3.1.44 google-auth==2.40.3 httplib2==0.22.0 identify==2.6.12 idna==3.10 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.6 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.24.0 jsonschema-specifications==2025.4.1 keystoneauth1==5.11.1 kubernetes==33.1.0 lftools==0.37.13 lxml==5.4.0 MarkupSafe==3.0.2 msgpack==1.1.1 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==4.6.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==4.0.2 oslo.config==9.8.0 oslo.context==6.0.0 oslo.i18n==6.5.1 oslo.log==7.1.0 oslo.serialization==5.7.0 oslo.utils==9.0.0 packaging==25.0 pbr==6.1.1 platformdirs==4.3.8 prettytable==3.16.0 psutil==7.0.0 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.6.1 PyJWT==2.10.1 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.9.0 pyrsistent==0.20.0 python-cinderclient==9.7.0 python-dateutil==2.9.0.post0 python-heatclient==4.2.0 python-jenkins==1.8.2 python-keystoneclient==5.6.0 python-magnumclient==4.8.1 python-openstackclient==8.1.0 python-swiftclient==4.8.0 PyYAML==6.0.2 referencing==0.36.2 requests==2.32.4 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.25.1 rsa==4.9.1 ruamel.yaml==0.18.14 ruamel.yaml.clib==0.2.12 s3transfer==0.13.0 simplejson==3.20.1 six==1.17.0 smmap==5.0.2 soupsieve==2.7 stevedore==5.4.1 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.3 tqdm==4.67.1 typing_extensions==4.14.0 tzdata==2025.2 urllib3==1.26.20 virtualenv==20.31.2 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.17.2 xdg==6.0.0 xmltodict==0.14.2 yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
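The lf-activate-venv() steps above amount to creating a throwaway virtualenv, installing lftools into it, and prepending its bin directory to PATH. A rough sketch, assuming a pyenv-provided python3 (the venv path is the one reported in this log):

  python3 -m venv /tmp/venv-qCrN
  /tmp/venv-qCrN/bin/pip install lftools
  export PATH=/tmp/venv-qCrN/bin:$PATH
  # "Generating Requirements File": record the resolved package set
  pip freeze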
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/sh /tmp/jenkins2896797054681847457.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/sh -xe /tmp/jenkins15184397887947982739.sh
+ /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/run-project-csit.sh xacml-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl transfer progress omitted: 60.2M downloaded at ~71.7M/s]
Setting project configuration for: xacml-pdp
Configuring docker compose...
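The job detects that the Compose v2 CLI plugin is missing and downloads it (the ~60MB curl transfer above). A manual equivalent, following Docker's documented CLI-plugin location; the exact release URL used by the job is not shown in this log, so the one below is illustrative:

  DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
  mkdir -p "$DOCKER_CONFIG/cli-plugins"
  # illustrative release URL; pin a specific version in practice
  curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
    -o "$DOCKER_CONFIG/cli-plugins/docker-compose"
  chmod +x "$DOCKER_CONFIG/cli-plugins/docker-compose"
  docker compose version   # 'compose' should now resolve as a docker subcommand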
Starting xacml-pdp using postgres + Grafana/Prometheus
xacml-pdp Pulling
api Pulling
zookeeper Pulling
policy-db-migrator Pulling
prometheus Pulling
postgres Pulling
kafka Pulling
grafana Pulling
pap Pulling
[per-layer "Pulling fs layer" / Downloading / Extracting progress omitted]
xacml-pdp Pulled
pap Pulled
api Pulled
policy-db-migrator Pulled
[captured log ends while the remaining images are still downloading/extracting]
25.62MB/109.1MB f3b09c502777 Extracting [=======> ] 8.913MB/56.52MB eabd8714fec9 Extracting [==========> ] 77.99MB/375MB 8b5292c940e1 Extracting [===> ] 5.014MB/63.48MB e73cb4a42719 Extracting [============> ] 27.85MB/109.1MB 55f2b468da67 Extracting [==================================> ] 179.4MB/257.9MB eabd8714fec9 Extracting [===========> ] 86.9MB/375MB f3b09c502777 Extracting [==========> ] 11.7MB/56.52MB 8b5292c940e1 Extracting [======> ] 7.799MB/63.48MB e73cb4a42719 Extracting [==============> ] 31.2MB/109.1MB 55f2b468da67 Extracting [===================================> ] 182.7MB/257.9MB eabd8714fec9 Extracting [============> ] 95.26MB/375MB f3b09c502777 Extracting [============> ] 13.93MB/56.52MB 8b5292c940e1 Extracting [=======> ] 9.47MB/63.48MB eabd8714fec9 Extracting [=============> ] 102.5MB/375MB 55f2b468da67 Extracting [====================================> ] 186.6MB/257.9MB e73cb4a42719 Extracting [===============> ] 34.54MB/109.1MB f3b09c502777 Extracting [==============> ] 16.71MB/56.52MB 8b5292c940e1 Extracting [========> ] 11.14MB/63.48MB 55f2b468da67 Extracting [====================================> ] 190.5MB/257.9MB eabd8714fec9 Extracting [==============> ] 107.5MB/375MB e73cb4a42719 Extracting [=================> ] 37.88MB/109.1MB f3b09c502777 Extracting [=================> ] 19.5MB/56.52MB 8b5292c940e1 Extracting [==========> ] 13.37MB/63.48MB eabd8714fec9 Extracting [==============> ] 110.9MB/375MB e73cb4a42719 Extracting [==================> ] 41.22MB/109.1MB 55f2b468da67 Extracting [=====================================> ] 193.9MB/257.9MB f3b09c502777 Extracting [===================> ] 22.28MB/56.52MB 8b5292c940e1 Extracting [============> ] 16.15MB/63.48MB eabd8714fec9 Extracting [===============> ] 114.8MB/375MB 55f2b468da67 Extracting [=====================================> ] 195.5MB/257.9MB e73cb4a42719 Extracting [====================> ] 45.68MB/109.1MB f3b09c502777 Extracting [=======================> ] 26.18MB/56.52MB eabd8714fec9 Extracting [===============> ] 117.5MB/375MB 8b5292c940e1 Extracting [=============> ] 17.27MB/63.48MB e73cb4a42719 Extracting [=======================> ] 50.69MB/109.1MB f3b09c502777 Extracting [===========================> ] 31.2MB/56.52MB 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB eabd8714fec9 Extracting [================> ] 120.9MB/375MB 8b5292c940e1 Extracting [===============> ] 19.5MB/63.48MB f3b09c502777 Extracting [===================================> ] 40.67MB/56.52MB e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB eabd8714fec9 Extracting [================> ] 124.8MB/375MB 8b5292c940e1 Extracting [=================> ] 22.28MB/63.48MB f3b09c502777 Extracting [============================================> ] 50.14MB/56.52MB e73cb4a42719 Extracting [=========================> ] 55.15MB/109.1MB 55f2b468da67 Extracting [======================================> ] 201.1MB/257.9MB eabd8714fec9 Extracting [=================> ] 128.7MB/375MB 8b5292c940e1 Extracting [==================> ] 23.95MB/63.48MB f3b09c502777 Extracting [=================================================> ] 55.71MB/56.52MB e73cb4a42719 Extracting [==========================> ] 57.38MB/109.1MB 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB eabd8714fec9 Extracting [=================> ] 
132.6MB/375MB 8b5292c940e1 Extracting [=====================> ] 26.74MB/63.48MB e73cb4a42719 Extracting [===========================> ] 59.6MB/109.1MB eabd8714fec9 Extracting [==================> ] 136.5MB/375MB 55f2b468da67 Extracting [=======================================> ] 206.1MB/257.9MB 8b5292c940e1 Extracting [=======================> ] 30.08MB/63.48MB e73cb4a42719 Extracting [=============================> ] 65.18MB/109.1MB eabd8714fec9 Extracting [==================> ] 138.7MB/375MB 8b5292c940e1 Extracting [========================> ] 31.2MB/63.48MB 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB e73cb4a42719 Extracting [==============================> ] 66.85MB/109.1MB eabd8714fec9 Extracting [===================> ] 142.6MB/375MB 55f2b468da67 Extracting [========================================> ] 209.5MB/257.9MB 8b5292c940e1 Extracting [==========================> ] 33.42MB/63.48MB e73cb4a42719 Extracting [================================> ] 71.3MB/109.1MB eabd8714fec9 Extracting [===================> ] 145.9MB/375MB 55f2b468da67 Extracting [=========================================> ] 211.7MB/257.9MB 8b5292c940e1 Extracting [============================> ] 36.21MB/63.48MB e73cb4a42719 Extracting [==================================> ] 74.65MB/109.1MB eabd8714fec9 Extracting [===================> ] 149.3MB/375MB e73cb4a42719 Extracting [===================================> ] 77.99MB/109.1MB 55f2b468da67 Extracting [=========================================> ] 213.9MB/257.9MB 8b5292c940e1 Extracting [==============================> ] 38.44MB/63.48MB eabd8714fec9 Extracting [====================> ] 152.6MB/375MB 55f2b468da67 Extracting [==========================================> ] 217.3MB/257.9MB 8b5292c940e1 Extracting [================================> ] 41.22MB/63.48MB e73cb4a42719 Extracting [=====================================> ] 81.89MB/109.1MB eabd8714fec9 Extracting [====================> ] 156MB/375MB 55f2b468da67 Extracting [==========================================> ] 221.7MB/257.9MB 8b5292c940e1 Extracting [===================================> ] 44.56MB/63.48MB e73cb4a42719 Extracting [=======================================> ] 86.34MB/109.1MB f3b09c502777 Pull complete 55f2b468da67 Extracting [===========================================> ] 223.4MB/257.9MB 8b5292c940e1 Extracting [====================================> ] 46.79MB/63.48MB e73cb4a42719 Extracting [========================================> ] 89.13MB/109.1MB eabd8714fec9 Extracting [=====================> ] 158.2MB/375MB 55f2b468da67 Extracting [===========================================> ] 223.9MB/257.9MB eabd8714fec9 Extracting [=====================> ] 162.1MB/375MB e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB 8b5292c940e1 Extracting [======================================> ] 49.02MB/63.48MB eabd8714fec9 Extracting [======================> ] 165.4MB/375MB e73cb4a42719 Extracting [===========================================> ] 94.7MB/109.1MB 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB 8b5292c940e1 Extracting [=======================================> ] 50.69MB/63.48MB 408012a7b118 Extracting [==================================================>] 637B/637B 408012a7b118 Extracting [==================================================>] 637B/637B eabd8714fec9 Extracting [======================> ] 
167.1MB/375MB e73cb4a42719 Extracting [============================================> ] 96.37MB/109.1MB 8b5292c940e1 Extracting [=========================================> ] 52.36MB/63.48MB 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB eabd8714fec9 Extracting [=======================> ] 176MB/375MB e73cb4a42719 Extracting [=============================================> ] 99.16MB/109.1MB 8b5292c940e1 Extracting [===========================================> ] 55.15MB/63.48MB eabd8714fec9 Extracting [=========================> ] 188.3MB/375MB eabd8714fec9 Extracting [===========================> ] 204.4MB/375MB 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB e73cb4a42719 Extracting [==============================================> ] 101.9MB/109.1MB eabd8714fec9 Extracting [============================> ] 216.1MB/375MB 55f2b468da67 Extracting [=============================================> ] 235.6MB/257.9MB 8b5292c940e1 Extracting [==============================================> ] 59.05MB/63.48MB eabd8714fec9 Extracting [=============================> ] 218.4MB/375MB e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB 408012a7b118 Pull complete 55f2b468da67 Extracting [=============================================> ] 236.7MB/257.9MB eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB 8b5292c940e1 Extracting [==============================================> ] 59.6MB/63.48MB e73cb4a42719 Extracting [================================================> ] 105.3MB/109.1MB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB eabd8714fec9 Extracting [=============================> ] 222.3MB/375MB 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB 8b5292c940e1 Extracting [=================================================> ] 62.39MB/63.48MB e73cb4a42719 Extracting [================================================> ] 106.4MB/109.1MB 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB eabd8714fec9 Extracting [==============================> ] 225.1MB/375MB eabd8714fec9 Extracting [==============================> ] 228.4MB/375MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB eabd8714fec9 Extracting [===============================> ] 235.1MB/375MB 55f2b468da67 Extracting [===============================================> ] 245.1MB/257.9MB e73cb4a42719 Extracting [=================================================> ] 108.1MB/109.1MB 44986281b8b9 Pull complete 8b5292c940e1 Pull complete 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 55f2b468da67 Extracting [================================================> ] 252.3MB/257.9MB eabd8714fec9 Extracting [===============================> ] 236.7MB/375MB e73cb4a42719 
Extracting [==================================================>] 109.1MB/109.1MB 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB eabd8714fec9 Extracting [===============================> ] 238.4MB/375MB 55f2b468da67 Extracting [=================================================> ] 257.4MB/257.9MB eabd8714fec9 Extracting [================================> ] 242.3MB/375MB bf70c5107ab5 Pull complete 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 454a4350d439 Pull complete e73cb4a42719 Pull complete 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB eabd8714fec9 Extracting [================================> ] 245.1MB/375MB 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB eabd8714fec9 Extracting [================================> ] 246.2MB/375MB eabd8714fec9 Extracting [=================================> ] 251.2MB/375MB eabd8714fec9 Extracting [==================================> ] 256.2MB/375MB eabd8714fec9 Extracting [===================================> ] 262.9MB/375MB eabd8714fec9 Extracting [===================================> ] 267.9MB/375MB eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB eabd8714fec9 Extracting [====================================> ] 276.9MB/375MB eabd8714fec9 Extracting [=====================================> ] 281.9MB/375MB eabd8714fec9 Extracting [======================================> ] 290.2MB/375MB 55f2b468da67 Pull complete 1ccde423731d Pull complete a83b68436f09 Pull complete eabd8714fec9 Extracting [=======================================> ] 294.1MB/375MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 9a8c18aee5ea Pull complete eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB eabd8714fec9 Extracting [=======================================> ] 299.7MB/375MB eabd8714fec9 Extracting [========================================> ] 303MB/375MB eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB 82bfc142787e Extracting [> ] 98.3kB/8.613MB 82bfc142787e Extracting [==> ] 491.5kB/8.613MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB eabd8714fec9 Extracting [=========================================> ] 312.5MB/375MB eabd8714fec9 Extracting [=========================================> ] 314.7MB/375MB eabd8714fec9 Extracting [==========================================> ] 318.6MB/375MB eabd8714fec9 
Extracting [==========================================> ] 322MB/375MB 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B eabd8714fec9 Extracting [===========================================> ] 324.2MB/375MB eabd8714fec9 Extracting [===========================================> ] 327.5MB/375MB eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB eabd8714fec9 Extracting [============================================> ] 332MB/375MB eabd8714fec9 Extracting [============================================> ] 334.2MB/375MB eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB eabd8714fec9 Extracting [=============================================> ] 341.5MB/375MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB eabd8714fec9 Extracting [==============================================> ] 345.4MB/375MB eabd8714fec9 Extracting [==============================================> ] 348.2MB/375MB eabd8714fec9 Extracting [===============================================> ] 353.7MB/375MB 82bfc142787e Pull complete 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 7221d93db8a9 Pull complete 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B grafana Pulled 787d6bee9571 Pull complete 7df673c7455d Pull complete 46baca71a4ef Pull complete 13ff0988aaea Extracting [==================================================>] 167B/167B prometheus Pulled eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 13ff0988aaea Pull complete 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB eabd8714fec9 Extracting [================================================> ] 363.2MB/375MB b0e0ef7895f4 Extracting [================> ] 12.19MB/37.01MB eabd8714fec9 Extracting [=================================================> ] 369.3MB/375MB 4b82842ab819 Pull complete 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B b0e0ef7895f4 Extracting [======================================> ] 28.31MB/37.01MB eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB b0e0ef7895f4 Pull complete 7e568a0dc8fb Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB postgres Pulled c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting 
[==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B 8f10199ed94b Extracting [========================> ] 4.325MB/8.768MB e040ea11fa10 Pull complete 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB f963a77d2726 Pull complete 09d5a3f70313 Extracting [======> ] 13.37MB/109.2MB f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 09d5a3f70313 Extracting [=============> ] 28.41MB/109.2MB f3a82e9f1761 Extracting [=============> ] 11.93MB/44.41MB 09d5a3f70313 Extracting [===================> ] 42.34MB/109.2MB f3a82e9f1761 Extracting [===============================> ] 27.98MB/44.41MB 09d5a3f70313 Extracting [============================> ] 61.28MB/109.2MB f3a82e9f1761 Extracting [============================================> ] 39.91MB/44.41MB 09d5a3f70313 Extracting [===================================> ] 78.54MB/109.2MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 09d5a3f70313 Extracting [===========================================> ] 94.7MB/109.2MB 09d5a3f70313 Extracting [================================================> ] 105.8MB/109.2MB 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Pull complete 9c266ba63f51 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 356f5c2c843b Pull complete 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B kafka Pulled 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting 
zookeeper Pulled
Network compose_default Creating
Network compose_default Created
Container postgres Creating
Container zookeeper Creating
Container prometheus Creating
Container postgres Created
Container prometheus Created
Container policy-db-migrator Creating
Container grafana Creating
Container zookeeper Created
Container kafka Creating
Container grafana Created
Container kafka Created
Container policy-db-migrator Created
Container policy-api Creating
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-xacml-pdp Creating
Container policy-xacml-pdp Created
Container prometheus Starting
Container postgres Starting
Container zookeeper Starting
Container zookeeper Started
Container kafka Starting
Container postgres Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container prometheus Started
Container grafana Starting
Container grafana Started
Container kafka Started
Container policy-api Started
Container policy-pap Starting
Container policy-pap Started
Container policy-xacml-pdp Starting
Container policy-xacml-pdp Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 1 minute for xacml-pdp to start...
Checking if REST port 30004 is open on localhost ...
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up About a minute
Cloning into '/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests/models'...
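The "Checking if REST port 30004 is open on localhost" step above is the readiness gate before the Robot suites run. As a rough sketch only (the actual CSIT tooling does this from shell scripts; wait_for_port, the 2-second retry cadence, and the 60-second budget are illustrative, not the project's code), the probe amounts to polling the mapped port until a TCP connect succeeds:

import socket
import time

def wait_for_port(host: str, port: int, timeout_s: int = 60) -> bool:
    """Poll a TCP port until it accepts connections or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # something is listening, so the REST endpoint is reachable
        except OSError:
            time.sleep(2)  # container still starting; retry shortly
    return False

# 30004 is the xacml-pdp REST port the log reports checking on localhost
print("open" if wait_for_port("localhost", 30004) else "timed out")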
Building robot framework docker image
sha256:d2d77c24342c15d7072fec4116c160b141cdde3a3e64bb728d85b56ecee46b14
top - 14:58:22 up 4 min,  0 users,  load average: 2.32, 1.39, 0.57
Tasks: 228 total,   1 running, 150 sleeping,   0 stopped,   0 zombie
%Cpu(s): 14.9 us,  3.3 sy,  0.0 ni, 78.3 id,  3.3 wa,  0.0 hi,  0.1 si,  0.1 st
              total        used        free      shared  buff/cache   available
Mem:            31G        2.6G         21G         27M        7.1G         28G
Swap:          1.0G          0B        1.0G
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up About a minute
CONTAINER ID   NAME               CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
f0a9bae87a4b   policy-xacml-pdp   0.72%   171.7MiB / 31.41GiB   0.53%   43.7kB / 53.8kB   0B / 4.1kB      51
a15acaf3a26a   policy-pap         1.25%   480.5MiB / 31.41GiB   1.49%   2.13MB / 1.06MB   0B / 139MB      68
dd9613f02557   policy-api         0.11%   553.3MiB / 31.41GiB   1.72%   1.14MB / 985kB    0B / 0B         57
455095920e36   kafka              4.13%   389.6MiB / 31.41GiB   1.21%   181kB / 170kB     0B / 639kB      83
5fd2387c509f   grafana            0.12%   107MiB / 31.41GiB     0.33%   19.5MB / 174kB    0B / 31.1MB     19
93a19590a75c   zookeeper          0.08%   92.48MiB / 31.41GiB   0.29%   54kB / 46.9kB     4.1kB / 557kB   63
9981a78d1373   prometheus         0.00%   20.6MiB / 31.41GiB    0.06%   62.4kB / 3.18kB   225kB / 0B      13
285c16058ed3   postgres           0.02%   86.02MiB / 31.41GiB   0.27%   2.56MB / 3.75MB   0B / 159MB      26
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
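The ROBOT_VARIABLES above are ordinary Robot Framework -v name:value overrides. For orientation only, a minimal Python equivalent of this invocation (the policy-csit container actually drives robot from its shell entrypoint, and only a subset of the variables is repeated here) would be:

from robot import run  # Robot Framework's programmatic entry point

rc = run(
    "xacml-pdp-test.robot",
    "xacml-pdp-slas.robot",
    variable=[  # same name:value form as the -v flags above
        "POLICY_PDPX_IP:policy-xacml-pdp:6969",
        "POLICY_PAP_IP:policy-pap:6969",
        "POLICY_API_IP:policy-api:6969",
        "PROMETHEUS_IP:prometheus:9090",
        "TEST_ENV:docker",
    ],
    outputdir="/tmp/results",  # matches the Output/Log/Report paths in the results below
)
print("RESULT:", rc)  # 0 when every test passes, as in this run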
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | MakeTopics :: Creates the Policy topics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ExecuteXacmlPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
policy-csit | 4 tests, 4 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
policy-csit | 6 tests, 6 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up 3 minutes
Shut down started!
Collecting logs from docker compose containers...
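The passing ValidatePolicyDecisionsTotalCounter test above boils down to asking Prometheus for the xacml-pdp decision counter after ExecuteXacmlPolicy has generated some decisions. A minimal sketch of that kind of check against the Prometheus HTTP API (the metric name pdpx_policy_decisions_total is an assumption for illustration, not taken from this log):

import requests

def decisions_total(prom_url: str = "http://localhost:30259") -> float:
    """Sum the decision counter across its label sets (e.g. PERMIT/DENY)."""
    resp = requests.get(
        f"{prom_url}/api/v1/query",                       # standard Prometheus query endpoint
        params={"query": "pdpx_policy_decisions_total"},  # assumed metric name
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return sum(float(r["value"][1]) for r in results)

# 30259 is the Prometheus port printed earlier in this log
assert decisions_total() > 0, "no policy decisions recorded"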
grafana | logger=settings t=2025-06-13T14:56:39.037412731Z level=info msg="Starting Grafana" version=12.0.1 commit=80658a73c5355e3ed318e5e021c0866285153b57 branch=HEAD compiled=2025-06-13T14:56:39Z grafana | logger=settings t=2025-06-13T14:56:39.037859097Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2025-06-13T14:56:39.037878907Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2025-06-13T14:56:39.037884447Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2025-06-13T14:56:39.037888887Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2025-06-13T14:56:39.037893657Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2025-06-13T14:56:39.037897537Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2025-06-13T14:56:39.037901707Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2025-06-13T14:56:39.037906497Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2025-06-13T14:56:39.037910887Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2025-06-13T14:56:39.037914538Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2025-06-13T14:56:39.037918618Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2025-06-13T14:56:39.037922658Z level=info msg=Target target=[all] grafana | logger=settings t=2025-06-13T14:56:39.037932818Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2025-06-13T14:56:39.037936678Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2025-06-13T14:56:39.037940428Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2025-06-13T14:56:39.037945378Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2025-06-13T14:56:39.037950498Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2025-06-13T14:56:39.037956838Z level=info msg="App mode production" grafana | logger=featuremgmt t=2025-06-13T14:56:39.038421534Z level=info msg=FeatureToggles correlations=true alertingInsights=true panelMonitoring=true formatString=true newDashboardSharingComponent=true pluginsDetailsRightPanel=true promQLScope=true dashboardSceneSolo=true azureMonitorEnableUserAuth=true preinstallAutoUpdate=true logsExploreTableVisualisation=true angularDeprecationUI=true recordedQueriesMulti=true dashgpt=true dataplaneFrontendFallback=true ssoSettingsSAML=true dashboardScene=true unifiedStorageSearchPermissionFiltering=true grafanaconThemes=true awsAsyncQueryCaching=true newPDFRendering=true alertingSimplifiedRouting=true lokiLabelNamesQueryApi=true transformationsRedesign=true alertRuleRestore=true newFiltersUI=true externalCorePlugins=true groupToNestedTableTransformation=true logsInfiniteScrolling=true azureMonitorPrometheusExemplars=true 
tlsMemcached=true kubernetesPlaylists=true pinNavItems=true alertingUIOptimizeReducer=true prometheusAzureOverrideAudience=true cloudWatchCrossAccountQuerying=true recoveryThreshold=true alertingQueryAndExpressionsStepMode=true influxdbBackendMigration=true useSessionStorageForRedirection=true logsContextDatasourceUi=true logRowsPopoverMenu=true annotationPermissionUpdate=true publicDashboardsScene=true kubernetesClientDashboardsFolders=true alertingRuleVersionHistoryRestore=true onPremToCloudMigrations=true cloudWatchNewLabelParsing=true addFieldFromCalculationStatFunctions=true alertingRuleRecoverDeleted=true prometheusUsesCombobox=true lokiQuerySplitting=true ssoSettingsApi=true cloudWatchRoundUpEndTime=true reportingUseRawTimeRange=true unifiedRequestLog=true lokiQueryHints=true alertingRulePermanentlyDelete=true logsPanelControls=true dashboardSceneForViewers=true alertingApiServer=true nestedFolders=true alertingNotificationsStepMode=true failWrongDSUID=true lokiStructuredMetadata=true grafana | logger=sqlstore t=2025-06-13T14:56:39.038496635Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2025-06-13T14:56:39.038517925Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2025-06-13T14:56:39.040421991Z level=info msg="Locking database" grafana | logger=migrator t=2025-06-13T14:56:39.040440042Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2025-06-13T14:56:39.041437715Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2025-06-13T14:56:39.042719423Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.281727ms grafana | logger=migrator t=2025-06-13T14:56:39.04843808Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2025-06-13T14:56:39.049075278Z level=info msg="Migration successfully executed" id="create user table" duration=635.099µs grafana | logger=migrator t=2025-06-13T14:56:39.054290768Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2025-06-13T14:56:39.055825238Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.52141ms grafana | logger=migrator t=2025-06-13T14:56:39.062826123Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2025-06-13T14:56:39.063644464Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=819.421µs grafana | logger=migrator t=2025-06-13T14:56:39.067399765Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2025-06-13T14:56:39.068134455Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=734.29µs grafana | logger=migrator t=2025-06-13T14:56:39.071580961Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2025-06-13T14:56:39.07226162Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=679.919µs grafana | logger=migrator t=2025-06-13T14:56:39.079383726Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2025-06-13T14:56:39.081386874Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.003028ms grafana | logger=migrator t=2025-06-13T14:56:39.084540566Z level=info msg="Executing 
migration" id="create user table v2" grafana | logger=migrator t=2025-06-13T14:56:39.085121484Z level=info msg="Migration successfully executed" id="create user table v2" duration=580.398µs grafana | logger=migrator t=2025-06-13T14:56:39.088285936Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2025-06-13T14:56:39.088967095Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=680.969µs grafana | logger=migrator t=2025-06-13T14:56:39.094675682Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2025-06-13T14:56:39.095787777Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.088245ms grafana | logger=migrator t=2025-06-13T14:56:39.099456607Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2025-06-13T14:56:39.100024294Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=567.248µs grafana | logger=migrator t=2025-06-13T14:56:39.103959708Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2025-06-13T14:56:39.104414094Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=454.355µs grafana | logger=migrator t=2025-06-13T14:56:39.110518596Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2025-06-13T14:56:39.111850313Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.332077ms grafana | logger=migrator t=2025-06-13T14:56:39.11605559Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2025-06-13T14:56:39.116094821Z level=info msg="Migration successfully executed" id="Update user table charset" duration=40.121µs grafana | logger=migrator t=2025-06-13T14:56:39.119245833Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2025-06-13T14:56:39.121052848Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.809215ms grafana | logger=migrator t=2025-06-13T14:56:39.124290131Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2025-06-13T14:56:39.124489004Z level=info msg="Migration successfully executed" id="Add missing user data" duration=198.673µs grafana | logger=migrator t=2025-06-13T14:56:39.129897647Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2025-06-13T14:56:39.131517458Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.619881ms grafana | logger=migrator t=2025-06-13T14:56:39.136542006Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2025-06-13T14:56:39.137692392Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.149206ms grafana | logger=migrator t=2025-06-13T14:56:39.141498833Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2025-06-13T14:56:39.14277799Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.278377ms grafana | logger=migrator t=2025-06-13T14:56:39.146799594Z level=info msg="Executing migration" id="Update is_service_account column 
to nullable" grafana | logger=migrator t=2025-06-13T14:56:39.155466101Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.663077ms grafana | logger=migrator t=2025-06-13T14:56:39.171375546Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2025-06-13T14:56:39.173321801Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.947876ms grafana | logger=migrator t=2025-06-13T14:56:39.177868653Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2025-06-13T14:56:39.178230367Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=365.284µs grafana | logger=migrator t=2025-06-13T14:56:39.18210942Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2025-06-13T14:56:39.18283011Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=719.83µs grafana | logger=migrator t=2025-06-13T14:56:39.187728606Z level=info msg="Executing migration" id="Add is_provisioned column to user" grafana | logger=migrator t=2025-06-13T14:56:39.188990233Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.261407ms grafana | logger=migrator t=2025-06-13T14:56:39.192888525Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2025-06-13T14:56:39.19323297Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=344.475µs grafana | logger=migrator t=2025-06-13T14:56:39.197082552Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" grafana | logger=migrator t=2025-06-13T14:56:39.197847442Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=764.14µs grafana | logger=migrator t=2025-06-13T14:56:39.202093799Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2025-06-13T14:56:39.20290615Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=813.971µs grafana | logger=migrator t=2025-06-13T14:56:39.206247865Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2025-06-13T14:56:39.206825422Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=577.257µs grafana | logger=migrator t=2025-06-13T14:56:39.21027223Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2025-06-13T14:56:39.211641438Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.369158ms grafana | logger=migrator t=2025-06-13T14:56:39.217940772Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2025-06-13T14:56:39.218718073Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=776.501µs grafana | logger=migrator t=2025-06-13T14:56:39.223311995Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator 
t=2025-06-13T14:56:39.224753094Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.443559ms grafana | logger=migrator t=2025-06-13T14:56:39.228673317Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2025-06-13T14:56:39.229499158Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=826.431µs grafana | logger=migrator t=2025-06-13T14:56:39.23410136Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2025-06-13T14:56:39.235273406Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.171196ms grafana | logger=migrator t=2025-06-13T14:56:39.240103971Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2025-06-13T14:56:39.240144571Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=41.79µs grafana | logger=migrator t=2025-06-13T14:56:39.244272917Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2025-06-13T14:56:39.2452025Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=932.793µs grafana | logger=migrator t=2025-06-13T14:56:39.249163953Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2025-06-13T14:56:39.249955184Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=790.721µs grafana | logger=migrator t=2025-06-13T14:56:39.255350247Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2025-06-13T14:56:39.256140967Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=789.99µs grafana | logger=migrator t=2025-06-13T14:56:39.259563923Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2025-06-13T14:56:39.260419584Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=852.161µs grafana | logger=migrator t=2025-06-13T14:56:39.269454227Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:56:39.274701457Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.24731ms grafana | logger=migrator t=2025-06-13T14:56:39.278174164Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2025-06-13T14:56:39.279249138Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.075214ms grafana | logger=migrator t=2025-06-13T14:56:39.282950749Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2025-06-13T14:56:39.28384132Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=888.531µs grafana | logger=migrator t=2025-06-13T14:56:39.286947912Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2025-06-13T14:56:39.287877244Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=928.292µs grafana | logger=migrator 
t=2025-06-13T14:56:39.302360789Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2025-06-13T14:56:39.303954511Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.595312ms grafana | logger=migrator t=2025-06-13T14:56:39.308217319Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2025-06-13T14:56:39.309614067Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.399208ms grafana | logger=migrator t=2025-06-13T14:56:39.313064894Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2025-06-13T14:56:39.313473299Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=408.336µs grafana | logger=migrator t=2025-06-13T14:56:39.319751514Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:56:39.320554275Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=802.651µs grafana | logger=migrator t=2025-06-13T14:56:39.327899394Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2025-06-13T14:56:39.328502382Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=602.558µs grafana | logger=migrator t=2025-06-13T14:56:39.332689998Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2025-06-13T14:56:39.333861964Z level=info msg="Migration successfully executed" id="create star table" duration=1.172116ms grafana | logger=migrator t=2025-06-13T14:56:39.339295427Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2025-06-13T14:56:39.340089708Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=793.451µs grafana | logger=migrator t=2025-06-13T14:56:39.343472013Z level=info msg="Executing migration" id="Add column dashboard_uid in star" grafana | logger=migrator t=2025-06-13T14:56:39.344953284Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.480561ms grafana | logger=migrator t=2025-06-13T14:56:39.34995006Z level=info msg="Executing migration" id="Add column org_id in star" grafana | logger=migrator t=2025-06-13T14:56:39.35139453Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.44423ms grafana | logger=migrator t=2025-06-13T14:56:39.356798133Z level=info msg="Executing migration" id="Add column updated in star" grafana | logger=migrator t=2025-06-13T14:56:39.359311277Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=2.510134ms grafana | logger=migrator t=2025-06-13T14:56:39.362908955Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" grafana | logger=migrator t=2025-06-13T14:56:39.364375735Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.47022ms grafana | logger=migrator t=2025-06-13T14:56:39.368253327Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2025-06-13T14:56:39.369126759Z level=info 
grafana | logger=migrator t=2025-06-13T14:56:39.372538065Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2025-06-13T14:56:39.37366097Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.121445ms
grafana | logger=migrator t=2025-06-13T14:56:39.381553727Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2025-06-13T14:56:39.382391888Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=837.991µs
grafana | logger=migrator t=2025-06-13T14:56:39.38633024Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:39.38772538Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.395599ms
grafana | logger=migrator t=2025-06-13T14:56:39.391624422Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:39.392759348Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.135416ms
grafana | logger=migrator t=2025-06-13T14:56:39.395840909Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:39.396718Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=876.891µs
grafana | logger=migrator t=2025-06-13T14:56:39.403120486Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2025-06-13T14:56:39.403280739Z level=info msg="Migration successfully executed" id="Update org table charset" duration=161.232µs
grafana | logger=migrator t=2025-06-13T14:56:39.407664898Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2025-06-13T14:56:39.40781465Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=151.282µs
grafana | logger=migrator t=2025-06-13T14:56:39.411473569Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2025-06-13T14:56:39.412084398Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=609.859µs
grafana | logger=migrator t=2025-06-13T14:56:39.416086661Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2025-06-13T14:56:39.417744714Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.657153ms
grafana | logger=migrator t=2025-06-13T14:56:39.430566246Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2025-06-13T14:56:39.432180988Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.615242ms
grafana | logger=migrator t=2025-06-13T14:56:39.437412249Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2025-06-13T14:56:39.438512453Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.100014ms
grafana | logger=migrator t=2025-06-13T14:56:39.441904749Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2025-06-13T14:56:39.442873033Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=968.414µs
grafana | logger=migrator t=2025-06-13T14:56:39.448865283Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2025-06-13T14:56:39.449770815Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=905.162µs
grafana | logger=migrator t=2025-06-13T14:56:39.453442784Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2025-06-13T14:56:39.454112524Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=669.9µs
grafana | logger=migrator t=2025-06-13T14:56:39.458636344Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2025-06-13T14:56:39.465071262Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.430918ms
grafana | logger=migrator t=2025-06-13T14:56:39.469510271Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2025-06-13T14:56:39.471229845Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.720483ms
grafana | logger=migrator t=2025-06-13T14:56:39.476350194Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2025-06-13T14:56:39.478229878Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.878794ms
grafana | logger=migrator t=2025-06-13T14:56:39.487013736Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2025-06-13T14:56:39.48796168Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=947.874µs
grafana | logger=migrator t=2025-06-13T14:56:39.491763511Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2025-06-13T14:56:39.492216787Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=450.836µs
grafana | logger=migrator t=2025-06-13T14:56:39.497885273Z level=info msg="Executing migration" id="drop table dashboard_v1"
grafana | logger=migrator t=2025-06-13T14:56:39.499292952Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.409149ms
grafana | logger=migrator t=2025-06-13T14:56:39.503306696Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
grafana | logger=migrator t=2025-06-13T14:56:39.503331916Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=26.72µs
grafana | logger=migrator t=2025-06-13T14:56:39.507889378Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
grafana | logger=migrator t=2025-06-13T14:56:39.510005417Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.114939ms
grafana | logger=migrator t=2025-06-13T14:56:39.515647993Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
grafana | logger=migrator t=2025-06-13T14:56:39.517754341Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.086467ms
grafana | logger=migrator t=2025-06-13T14:56:39.521096416Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.523087163Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.989126ms
grafana | logger=migrator t=2025-06-13T14:56:39.526145464Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.52734952Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.203306ms
grafana | logger=migrator t=2025-06-13T14:56:39.533335891Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.537231303Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.902212ms
grafana | logger=migrator t=2025-06-13T14:56:39.543072732Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.544161997Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.088565ms
grafana | logger=migrator t=2025-06-13T14:56:39.558117155Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2025-06-13T14:56:39.559804877Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.688422ms
grafana | logger=migrator t=2025-06-13T14:56:39.576305799Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2025-06-13T14:56:39.57634931Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=45.371µs
grafana | logger=migrator t=2025-06-13T14:56:39.581781483Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2025-06-13T14:56:39.581822734Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=42.671µs
grafana | logger=migrator t=2025-06-13T14:56:39.588623766Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.594166351Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=5.541975ms
grafana | logger=migrator t=2025-06-13T14:56:39.606221533Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.610890386Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=4.666853ms
grafana | logger=migrator t=2025-06-13T14:56:39.61572668Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.617793118Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.066458ms
grafana | logger=migrator t=2025-06-13T14:56:39.621524938Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.623592716Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.066978ms
grafana | logger=migrator t=2025-06-13T14:56:39.628818897Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.62906131Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=242.223µs
grafana | logger=migrator t=2025-06-13T14:56:39.633300178Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:39.635917723Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=2.621145ms
grafana | logger=migrator t=2025-06-13T14:56:39.641711741Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2025-06-13T14:56:39.64319171Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.486299ms
grafana | logger=migrator t=2025-06-13T14:56:39.649909602Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2025-06-13T14:56:39.650092324Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=183.423µs
grafana | logger=migrator t=2025-06-13T14:56:39.654139318Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2025-06-13T14:56:39.655681519Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.540961ms
grafana | logger=migrator t=2025-06-13T14:56:39.659965916Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2025-06-13T14:56:39.661532528Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.567932ms
grafana | logger=migrator t=2025-06-13T14:56:39.685369128Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-13T14:56:39.690715951Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.346823ms
grafana | logger=migrator t=2025-06-13T14:56:39.694591232Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2025-06-13T14:56:39.695265822Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=671.8µs
grafana | logger=migrator t=2025-06-13T14:56:39.700525583Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2025-06-13T14:56:39.701309494Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=783.12µs
grafana | logger=migrator t=2025-06-13T14:56:39.704671139Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2025-06-13T14:56:39.70551726Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=866.112µs
grafana | logger=migrator t=2025-06-13T14:56:39.709072918Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2025-06-13T14:56:39.709472144Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=396.185µs
grafana | logger=migrator t=2025-06-13T14:56:39.716183814Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2025-06-13T14:56:39.717238478Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.048675ms
grafana | logger=migrator t=2025-06-13T14:56:39.72183861Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2025-06-13T14:56:39.724153711Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.315051ms
grafana | logger=migrator t=2025-06-13T14:56:39.72708276Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2025-06-13T14:56:39.728031703Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=947.853µs
grafana | logger=migrator t=2025-06-13T14:56:39.731548091Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2025-06-13T14:56:39.731716513Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=168.212µs
grafana | logger=migrator t=2025-06-13T14:56:39.736467627Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2025-06-13T14:56:39.736649339Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=181.502µs
grafana | logger=migrator t=2025-06-13T14:56:39.73963651Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2025-06-13T14:56:39.74041011Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=773.11µs
grafana | logger=migrator t=2025-06-13T14:56:39.745050622Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.747327143Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.276021ms
grafana | logger=migrator t=2025-06-13T14:56:39.752914628Z level=info msg="Executing migration" id="Add deleted for dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.755820667Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.905269ms
grafana | logger=migrator t=2025-06-13T14:56:39.759401016Z level=info msg="Executing migration" id="Add index for deleted"
grafana | logger=migrator t=2025-06-13T14:56:39.760210367Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=810.561µs
grafana | logger=migrator t=2025-06-13T14:56:39.763700464Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
grafana | logger=migrator t=2025-06-13T14:56:39.766102186Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.401202ms
grafana | logger=migrator t=2025-06-13T14:56:39.769530332Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
grafana | logger=migrator t=2025-06-13T14:56:39.771798583Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.267251ms
grafana | logger=migrator t=2025-06-13T14:56:39.776822991Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
grafana | logger=migrator t=2025-06-13T14:56:39.777294717Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=471.296µs
grafana | logger=migrator t=2025-06-13T14:56:39.780451489Z level=info msg="Executing migration" id="Add apiVersion for dashboard"
grafana | logger=migrator t=2025-06-13T14:56:39.78273291Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.280761ms
grafana | logger=migrator t=2025-06-13T14:56:39.788610019Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table"
grafana | logger=migrator t=2025-06-13T14:56:39.789476351Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=866.412µs
grafana | logger=migrator t=2025-06-13T14:56:39.803112304Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star"
grafana | logger=migrator t=2025-06-13T14:56:39.804177619Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=1.063455ms
grafana | logger=migrator t=2025-06-13T14:56:39.814585299Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2025-06-13T14:56:39.815526132Z level=info msg="Migration successfully executed" id="create data_source table" duration=940.703µs
grafana | logger=migrator t=2025-06-13T14:56:39.818944238Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2025-06-13T14:56:39.820037712Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.122454ms
grafana | logger=migrator t=2025-06-13T14:56:39.825546147Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2025-06-13T14:56:39.826387999Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=841.282µs
grafana | logger=migrator t=2025-06-13T14:56:39.83022184Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:39.83101547Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=793.45µs
grafana | logger=migrator t=2025-06-13T14:56:39.834230743Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2025-06-13T14:56:39.834971544Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=740.701µs
grafana | logger=migrator t=2025-06-13T14:56:39.842069739Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2025-06-13T14:56:39.851974343Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.906604ms
grafana | logger=migrator t=2025-06-13T14:56:39.855361769Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2025-06-13T14:56:39.856444873Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.082304ms
grafana | logger=migrator t=2025-06-13T14:56:39.859792438Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2025-06-13T14:56:39.860370746Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=578.088µs
grafana | logger=migrator t=2025-06-13T14:56:39.866790842Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2025-06-13T14:56:39.86816487Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.376838ms
grafana | logger=migrator t=2025-06-13T14:56:39.875214936Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2025-06-13T14:56:39.875974396Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=759.71µs
grafana | logger=migrator t=2025-06-13T14:56:39.881742504Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2025-06-13T14:56:39.884117086Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.372763ms
grafana | logger=migrator t=2025-06-13T14:56:39.889188364Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2025-06-13T14:56:39.891786659Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.597905ms
grafana | logger=migrator t=2025-06-13T14:56:39.895827023Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2025-06-13T14:56:39.895851764Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=25.381µs
grafana | logger=migrator t=2025-06-13T14:56:39.901955436Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2025-06-13T14:56:39.902150279Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=195.263µs
grafana | logger=migrator t=2025-06-13T14:56:39.905589655Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2025-06-13T14:56:39.90967971Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.088265ms
grafana | logger=migrator t=2025-06-13T14:56:39.914791129Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2025-06-13T14:56:39.915189424Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=399.055µs
grafana | logger=migrator t=2025-06-13T14:56:39.919894438Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2025-06-13T14:56:39.920183412Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=289.284µs
grafana | logger=migrator t=2025-06-13T14:56:39.925302721Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2025-06-13T14:56:39.927675833Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.372582ms
grafana | logger=migrator t=2025-06-13T14:56:39.940350913Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2025-06-13T14:56:39.94085281Z level=info msg="Migration successfully executed" id="Update uid value" duration=501.617µs
grafana | logger=migrator t=2025-06-13T14:56:39.946010929Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:39.947046564Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.036245ms
grafana | logger=migrator t=2025-06-13T14:56:39.950305448Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2025-06-13T14:56:39.951742056Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.435068ms
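Every completed step logs its duration in µs or ms, so when a CSIT run drags, the slow migrations can be pulled straight out of a saved console log. A small Go sketch under that assumption (the console.log path and the 5ms triage threshold are arbitrary choices, not part of this job):

// slowest_migrations.go — sketch for triaging migrator timings from a console log.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
	"time"
)

// One capture for the migration id, one for the duration token.
var entry = regexp.MustCompile(`id="([^"]+)" duration=([0-9.]+(?:µs|ms|s))`)

func main() {
	f, err := os.Open("console.log") // assumed path to the saved build log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var total time.Duration
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // flattened log lines are very long
	for sc.Scan() {
		for _, m := range entry.FindAllStringSubmatch(sc.Text(), -1) {
			d, err := time.ParseDuration(m[2]) // ParseDuration accepts both µs and ms
			if err != nil {
				continue
			}
			total += d
			if d > 5*time.Millisecond { // arbitrary threshold for "slow"
				fmt.Printf("%12s  %s\n", d, m[1])
			}
		}
	}
	fmt.Println("total migration time:", total)
}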
grafana | logger=migrator t=2025-06-13T14:56:39.955716061Z level=info msg="Executing migration" id="Add is_prunable column"
grafana | logger=migrator t=2025-06-13T14:56:39.958373636Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.656816ms
grafana | logger=migrator t=2025-06-13T14:56:39.961758912Z level=info msg="Executing migration" id="Add api_version column"
grafana | logger=migrator t=2025-06-13T14:56:39.964421638Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.662396ms
grafana | logger=migrator t=2025-06-13T14:56:39.970909445Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText"
grafana | logger=migrator t=2025-06-13T14:56:39.970992806Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=84.131µs
grafana | logger=migrator t=2025-06-13T14:56:39.976936236Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2025-06-13T14:56:39.977801168Z level=info msg="Migration successfully executed" id="create api_key table" duration=864.622µs
grafana | logger=migrator t=2025-06-13T14:56:39.98170222Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2025-06-13T14:56:39.982580552Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=877.922µs
grafana | logger=migrator t=2025-06-13T14:56:39.986292472Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2025-06-13T14:56:39.987143524Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=850.742µs
grafana | logger=migrator t=2025-06-13T14:56:39.995858601Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2025-06-13T14:56:39.997088337Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.229556ms
grafana | logger=migrator t=2025-06-13T14:56:40.00100942Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:40.001858602Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=849.052µs
grafana | logger=migrator t=2025-06-13T14:56:40.005058454Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2025-06-13T14:56:40.005917616Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=858.912µs
grafana | logger=migrator t=2025-06-13T14:56:40.01145127Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2025-06-13T14:56:40.012342941Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=891.532µs
grafana | logger=migrator t=2025-06-13T14:56:40.01604944Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2025-06-13T14:56:40.023374277Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.324837ms
grafana | logger=migrator t=2025-06-13T14:56:40.02962641Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2025-06-13T14:56:40.030237317Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=610.737µs
grafana | logger=migrator t=2025-06-13T14:56:40.033584211Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2025-06-13T14:56:40.034533225Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=948.814µs
grafana | logger=migrator t=2025-06-13T14:56:40.0379888Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2025-06-13T14:56:40.038975833Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=989.113µs
grafana | logger=migrator t=2025-06-13T14:56:40.045235566Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2025-06-13T14:56:40.046201219Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=965.413µs
grafana | logger=migrator t=2025-06-13T14:56:40.050693949Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2025-06-13T14:56:40.051164085Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=469.576µs
grafana | logger=migrator t=2025-06-13T14:56:40.054501049Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2025-06-13T14:56:40.055176527Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=674.678µs
grafana | logger=migrator t=2025-06-13T14:56:40.074774617Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2025-06-13T14:56:40.075679249Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=905.162µs
grafana | logger=migrator t=2025-06-13T14:56:40.079336478Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2025-06-13T14:56:40.082340977Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.00353ms
grafana | logger=migrator t=2025-06-13T14:56:40.085904694Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2025-06-13T14:56:40.088760262Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.857168ms
grafana | logger=migrator t=2025-06-13T14:56:40.093636187Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2025-06-13T14:56:40.093857779Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=221.832µs
grafana | logger=migrator t=2025-06-13T14:56:40.097853742Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2025-06-13T14:56:40.100923313Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.068821ms
grafana | logger=migrator t=2025-06-13T14:56:40.104392379Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2025-06-13T14:56:40.106994734Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.601735ms
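The api_key sequence above adds a service-account foreign key and then nulls it out wherever it had been stored as 0. The data fix implied by that migration id is a one-statement normalisation, roughly as sketched here (table and column names are inferred from the id, not copied from Grafana):

// nil_if_zero.go — sketch of "set service account foreign key to nil if 0".
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "modernc.org/sqlite" // assumed driver, as in the earlier sketch
)

func main() {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if _, err := db.Exec(`CREATE TABLE api_key (id INTEGER PRIMARY KEY, service_account_id INTEGER)`); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(`INSERT INTO api_key (service_account_id) VALUES (0), (42)`); err != nil {
		log.Fatal(err)
	}
	// 0 is not a valid service-account id; NULL keeps joins and FK checks honest.
	res, err := db.Exec(`UPDATE api_key SET service_account_id = NULL WHERE service_account_id = 0`)
	if err != nil {
		log.Fatal(err)
	}
	n, _ := res.RowsAffected()
	fmt.Println("rows normalised:", n) // expect 1
}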
grafana | logger=migrator t=2025-06-13T14:56:40.111962459Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2025-06-13T14:56:40.113252696Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.290247ms
grafana | logger=migrator t=2025-06-13T14:56:40.116938355Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2025-06-13T14:56:40.117830767Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=891.822µs
grafana | logger=migrator t=2025-06-13T14:56:40.121515076Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2025-06-13T14:56:40.122478178Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=959.672µs
grafana | logger=migrator t=2025-06-13T14:56:40.128355746Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2025-06-13T14:56:40.129142777Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=784.271µs
grafana | logger=migrator t=2025-06-13T14:56:40.132218657Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2025-06-13T14:56:40.132972807Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=753.83µs
grafana | logger=migrator t=2025-06-13T14:56:40.136177139Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2025-06-13T14:56:40.137103671Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=926.082µs
grafana | logger=migrator t=2025-06-13T14:56:40.141294857Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2025-06-13T14:56:40.141313837Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=19.54µs
grafana | logger=migrator t=2025-06-13T14:56:40.144159775Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2025-06-13T14:56:40.144181275Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=22.38µs
grafana | logger=migrator t=2025-06-13T14:56:40.14758116Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
grafana | logger=migrator t=2025-06-13T14:56:40.150700641Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.118901ms
grafana | logger=migrator t=2025-06-13T14:56:40.154825076Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
grafana | logger=migrator t=2025-06-13T14:56:40.157828476Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.00272ms
grafana | logger=migrator t=2025-06-13T14:56:40.161243381Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
grafana | logger=migrator t=2025-06-13T14:56:40.161264821Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=22.47µs
grafana | logger=migrator t=2025-06-13T14:56:40.167805277Z level=info msg="Executing migration" id="create quota table v1"
grafana | logger=migrator t=2025-06-13T14:56:40.169017494Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.212177ms
grafana | logger=migrator t=2025-06-13T14:56:40.173388571Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
grafana | logger=migrator t=2025-06-13T14:56:40.174338755Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=952.544µs
grafana | logger=migrator t=2025-06-13T14:56:40.178932445Z level=info msg="Executing migration" id="Update quota table charset"
grafana | logger=migrator t=2025-06-13T14:56:40.178947605Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=15.7µs
grafana | logger=migrator t=2025-06-13T14:56:40.183525296Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2025-06-13T14:56:40.184135404Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=609.988µs
grafana | logger=migrator t=2025-06-13T14:56:40.196455977Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:40.198316711Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.860334ms
grafana | logger=migrator t=2025-06-13T14:56:40.203133335Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2025-06-13T14:56:40.206114275Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.98034ms
grafana | logger=migrator t=2025-06-13T14:56:40.21102403Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2025-06-13T14:56:40.21104783Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=24.39µs
grafana | logger=migrator t=2025-06-13T14:56:40.214876321Z level=info msg="Executing migration" id="update NULL org_id to 1"
grafana | logger=migrator t=2025-06-13T14:56:40.215220145Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=342.705µs
grafana | logger=migrator t=2025-06-13T14:56:40.220016509Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1"
grafana | logger=migrator t=2025-06-13T14:56:40.230998804Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=10.981755ms
grafana | logger=migrator t=2025-06-13T14:56:40.238606035Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2025-06-13T14:56:40.239377265Z level=info msg="Migration successfully executed" id="create session table" duration=772.73µs
grafana | logger=migrator t=2025-06-13T14:56:40.242986872Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2025-06-13T14:56:40.243175075Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=185.513µs
grafana | logger=migrator t=2025-06-13T14:56:40.247599754Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2025-06-13T14:56:40.247677515Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=77.981µs
grafana | logger=migrator t=2025-06-13T14:56:40.250951448Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2025-06-13T14:56:40.251693748Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=741.659µs
grafana | logger=migrator t=2025-06-13T14:56:40.257077959Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2025-06-13T14:56:40.25783176Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=753.401µs
grafana | logger=migrator t=2025-06-13T14:56:40.262051445Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2025-06-13T14:56:40.262072685Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=21.96µs
grafana | logger=migrator t=2025-06-13T14:56:40.267035381Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2025-06-13T14:56:40.267068371Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=34.08µs
grafana | logger=migrator t=2025-06-13T14:56:40.27155918Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2025-06-13T14:56:40.278400211Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=6.837991ms
grafana | logger=migrator t=2025-06-13T14:56:40.29042526Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2025-06-13T14:56:40.295142333Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.717713ms
grafana | logger=migrator t=2025-06-13T14:56:40.300388912Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2025-06-13T14:56:40.300554475Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=170.653µs
grafana | logger=migrator t=2025-06-13T14:56:40.304603738Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2025-06-13T14:56:40.30476812Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=164.852µs
grafana | logger=migrator t=2025-06-13T14:56:40.309827877Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2025-06-13T14:56:40.31080346Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=977.312µs
grafana | logger=migrator t=2025-06-13T14:56:40.323463238Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2025-06-13T14:56:40.323509368Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=47.401µs
grafana | logger=migrator t=2025-06-13T14:56:40.327676303Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2025-06-13T14:56:40.334571385Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=6.893691ms
grafana | logger=migrator t=2025-06-13T14:56:40.338935742Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2025-06-13T14:56:40.339039594Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=104.012µs
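Note the contrast between index builds at around a millisecond and the various "Update ... table charset" steps landing in tens of microseconds: table charset conversion is only meaningful on MySQL, so on the SQLite store Grafana defaults to, these steps are presumably no-ops that the migrator merely records. A sketch of such a dialect guard (the Dialect type and function names are assumptions, not Grafana's migrator API):

// charset_sketch.go — illustrative dialect guard; not Grafana's actual code.
package main

import (
	"database/sql"
	"fmt"
)

type Dialect string

const (
	MySQL  Dialect = "mysql"
	SQLite Dialect = "sqlite3"
)

// charsetMigration returns the SQL for a charset update, or "" when the
// dialect has no table-level charset (the microsecond no-ops in the log).
func charsetMigration(d Dialect, table string) string {
	if d != MySQL {
		return ""
	}
	return fmt.Sprintf("ALTER TABLE %s CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci", table)
}

// run executes the migration only when the dialect produced a statement.
func run(db *sql.DB, d Dialect, table string) error {
	stmt := charsetMigration(d, table)
	if stmt == "" {
		return nil // nothing to do on this dialect
	}
	_, err := db.Exec(stmt)
	return err
}

func main() {
	fmt.Printf("mysql:  %q\n", charsetMigration(MySQL, "org_user"))
	fmt.Printf("sqlite: %q\n", charsetMigration(SQLite, "org_user"))
}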
grafana | logger=migrator t=2025-06-13T14:56:40.342068124Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2025-06-13T14:56:40.344468045Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.398851ms
grafana | logger=migrator t=2025-06-13T14:56:40.349025976Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2025-06-13T14:56:40.35240641Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.376934ms
grafana | logger=migrator t=2025-06-13T14:56:40.357951934Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2025-06-13T14:56:40.358250377Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=305.464µs
grafana | logger=migrator t=2025-06-13T14:56:40.365125659Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2025-06-13T14:56:40.366314324Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.185015ms
grafana | logger=migrator t=2025-06-13T14:56:40.37046533Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2025-06-13T14:56:40.371582414Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.116814ms
grafana | logger=migrator t=2025-06-13T14:56:40.375532517Z level=info msg="Executing migration" id="create alert table v1"
grafana | logger=migrator t=2025-06-13T14:56:40.377286389Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.752792ms
grafana | logger=migrator t=2025-06-13T14:56:40.384939841Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2025-06-13T14:56:40.386928918Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.988077ms
grafana | logger=migrator t=2025-06-13T14:56:40.392615233Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2025-06-13T14:56:40.393336342Z level=info msg="Migration successfully executed" id="add index alert state" duration=720.909µs
grafana | logger=migrator t=2025-06-13T14:56:40.396585425Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2025-06-13T14:56:40.397277514Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=692.089µs
grafana | logger=migrator t=2025-06-13T14:56:40.405418272Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2025-06-13T14:56:40.406528437Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.109845ms
grafana | logger=migrator t=2025-06-13T14:56:40.412423345Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2025-06-13T14:56:40.414169897Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.746482ms
grafana | logger=migrator t=2025-06-13T14:56:40.418236212Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:40.419247815Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.011603ms
grafana | logger=migrator t=2025-06-13T14:56:40.42723288Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2025-06-13T14:56:40.44231781Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=15.08279ms
grafana | logger=migrator t=2025-06-13T14:56:40.448115426Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2025-06-13T14:56:40.44909064Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=975.414µs
grafana | logger=migrator t=2025-06-13T14:56:40.452760448Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2025-06-13T14:56:40.453833673Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.072595ms
grafana | logger=migrator t=2025-06-13T14:56:40.46198637Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2025-06-13T14:56:40.462539378Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=552.848µs
grafana | logger=migrator t=2025-06-13T14:56:40.468059421Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2025-06-13T14:56:40.469648102Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.59086ms
grafana | logger=migrator t=2025-06-13T14:56:40.473975939Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2025-06-13T14:56:40.475001592Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.025253ms
grafana | logger=migrator t=2025-06-13T14:56:40.480727778Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2025-06-13T14:56:40.484784152Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.055404ms
grafana | logger=migrator t=2025-06-13T14:56:40.488995298Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2025-06-13T14:56:40.494156306Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.139898ms
grafana | logger=migrator t=2025-06-13T14:56:40.498132838Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2025-06-13T14:56:40.502184992Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.051534ms
grafana | logger=migrator t=2025-06-13T14:56:40.50885469Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2025-06-13T14:56:40.512976235Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.120915ms
grafana | logger=migrator t=2025-06-13T14:56:40.536441335Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2025-06-13T14:56:40.537358547Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=916.392µs
grafana | logger=migrator t=2025-06-13T14:56:40.540265436Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2025-06-13T14:56:40.540286456Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=21.92µs
grafana | logger=migrator t=2025-06-13T14:56:40.5443617Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2025-06-13T14:56:40.544383101Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=21.701µs
grafana | logger=migrator t=2025-06-13T14:56:40.549666241Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2025-06-13T14:56:40.550801305Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.132244ms
grafana | logger=migrator t=2025-06-13T14:56:40.554813139Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-13T14:56:40.556129456Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.315657ms
grafana | logger=migrator t=2025-06-13T14:56:40.57456368Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2025-06-13T14:56:40.57831886Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=3.75752ms
grafana | logger=migrator t=2025-06-13T14:56:40.593748003Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2025-06-13T14:56:40.594498744Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=750.611µs
grafana | logger=migrator t=2025-06-13T14:56:40.600813777Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-13T14:56:40.602020903Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.207166ms
grafana | logger=migrator t=2025-06-13T14:56:40.607960202Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2025-06-13T14:56:40.612225458Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.265516ms
grafana | logger=migrator t=2025-06-13T14:56:40.616782549Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2025-06-13T14:56:40.620933993Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.150804ms
grafana | logger=migrator t=2025-06-13T14:56:40.628396852Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2025-06-13T14:56:40.628635515Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=238.703µs
grafana | logger=migrator t=2025-06-13T14:56:40.633285667Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:40.634149638Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=863.551µs
grafana | logger=migrator t=2025-06-13T14:56:40.639840364Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2025-06-13T14:56:40.640603574Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=763.259µs
grafana | logger=migrator t=2025-06-13T14:56:40.647146821Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2025-06-13T14:56:40.65092369Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.776529ms
grafana | logger=migrator t=2025-06-13T14:56:40.656489334Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2025-06-13T14:56:40.656544775Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=56.491µs
grafana | logger=migrator t=2025-06-13T14:56:40.660276804Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2025-06-13T14:56:40.661212917Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=936.203µs
grafana | logger=migrator t=2025-06-13T14:56:40.665781957Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2025-06-13T14:56:40.666829621Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.046654ms
grafana | logger=migrator t=2025-06-13T14:56:40.677912877Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2025-06-13T14:56:40.67806457Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=158.513µs
grafana | logger=migrator t=2025-06-13T14:56:40.682547859Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2025-06-13T14:56:40.683792286Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.242997ms
grafana | logger=migrator t=2025-06-13T14:56:40.699143608Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2025-06-13T14:56:40.701210676Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=2.066038ms
grafana | logger=migrator t=2025-06-13T14:56:40.705959749Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2025-06-13T14:56:40.707045033Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.084684ms
grafana | logger=migrator t=2025-06-13T14:56:40.710784362Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2025-06-13T14:56:40.711878067Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.093285ms
grafana | logger=migrator t=2025-06-13T14:56:40.718874229Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2025-06-13T14:56:40.720061515Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.186886ms
grafana | logger=migrator t=2025-06-13T14:56:40.727153529Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2025-06-13T14:56:40.7280157Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=861.771µs
grafana | logger=migrator t=2025-06-13T14:56:40.734644278Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2025-06-13T14:56:40.734683159Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=40.251µs
grafana | logger=migrator t=2025-06-13T14:56:40.741364567Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.747305855Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.941188ms
grafana | logger=migrator t=2025-06-13T14:56:40.750869613Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2025-06-13T14:56:40.752414413Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.54494ms
grafana | logger=migrator t=2025-06-13T14:56:40.757918166Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.762631879Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.712443ms
grafana | logger=migrator t=2025-06-13T14:56:40.767160328Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2025-06-13T14:56:40.768303304Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.141616ms
grafana | logger=migrator t=2025-06-13T14:56:40.772535929Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2025-06-13T14:56:40.773710275Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.174266ms
grafana | logger=migrator t=2025-06-13T14:56:40.783078429Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2025-06-13T14:56:40.784707841Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.630022ms
grafana | logger=migrator t=2025-06-13T14:56:40.790368166Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2025-06-13T14:56:40.804480862Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.110646ms
grafana | logger=migrator t=2025-06-13T14:56:40.808623227Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2025-06-13T14:56:40.809554019Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=930.422µs
grafana | logger=migrator t=2025-06-13T14:56:40.825736723Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2025-06-13T14:56:40.827067901Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.336178ms
grafana | logger=migrator t=2025-06-13T14:56:40.839172881Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2025-06-13T14:56:40.839654747Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=481.466µs
grafana | logger=migrator t=2025-06-13T14:56:40.843388517Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2025-06-13T14:56:40.844040006Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=647.449µs
grafana | logger=migrator t=2025-06-13T14:56:40.84820389Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2025-06-13T14:56:40.848419033Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=214.993µs
grafana | logger=migrator t=2025-06-13T14:56:40.852531148Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.857006687Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.474539ms
grafana | logger=migrator t=2025-06-13T14:56:40.864311124Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.870174481Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.863357ms
grafana | logger=migrator t=2025-06-13T14:56:40.873706608Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.874650831Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=939.963µs
grafana | logger=migrator t=2025-06-13T14:56:40.879007238Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.880158774Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.150926ms
grafana | logger=migrator t=2025-06-13T14:56:40.886499337Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2025-06-13T14:56:40.886771701Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=271.874µs
grafana | logger=migrator t=2025-06-13T14:56:40.901182951Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2025-06-13T14:56:40.90860959Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=7.425809ms
grafana | logger=migrator t=2025-06-13T14:56:40.915679883Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2025-06-13T14:56:40.916380173Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=700.5µs
grafana | logger=migrator t=2025-06-13T14:56:40.923061131Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
grafana | logger=migrator t=2025-06-13T14:56:40.923417096Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=353.755µs
grafana | logger=migrator t=2025-06-13T14:56:40.928569424Z level=info msg="Executing migration" id="Move region to single row"
grafana | logger=migrator t=2025-06-13T14:56:40.929498746Z level=info msg="Migration successfully executed" id="Move region to single row" duration=925.022µs
grafana | logger=migrator t=2025-06-13T14:56:40.934127557Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.935670338Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.544211ms
grafana | logger=migrator t=2025-06-13T14:56:40.941303873Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.942099663Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=795.49µs
grafana | logger=migrator t=2025-06-13T14:56:40.955922406Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.957485386Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.56242ms
grafana | logger=migrator t=2025-06-13T14:56:40.964946975Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.966237212Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.289437ms
grafana | logger=migrator t=2025-06-13T14:56:40.971175257Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.972064209Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=888.662µs
grafana | logger=migrator t=2025-06-13T14:56:40.975389904Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2025-06-13T14:56:40.976240385Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=850.531µs
grafana | logger=migrator t=2025-06-13T14:56:40.980335638Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2025-06-13T14:56:40.980352699Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=17.961µs
grafana | logger=migrator t=2025-06-13T14:56:40.985658139Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null"
grafana | logger=migrator t=2025-06-13T14:56:40.98572435Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=64.861µs
grafana | logger=migrator t=2025-06-13T14:56:40.992558641Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null"
grafana | logger=migrator t=2025-06-13T14:56:40.992589071Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=28µs
grafana | logger=migrator t=2025-06-13T14:56:40.996628034Z level=info msg="Executing migration" id="create test_data table"
grafana | logger=migrator t=2025-06-13T14:56:40.997558147Z level=info msg="Migration successfully executed" id="create test_data table" duration=929.653µs
grafana | logger=migrator t=2025-06-13T14:56:41.002638644Z level=info msg="Executing migration" id="create dashboard_version table v1"
grafana | logger=migrator t=2025-06-13T14:56:41.00383481Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.193597ms
grafana | logger=migrator t=2025-06-13T14:56:41.007897069Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
grafana | logger=migrator t=2025-06-13T14:56:41.009349821Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.452412ms
grafana | logger=migrator t=2025-06-13T14:56:41.016252915Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | logger=migrator t=2025-06-13T14:56:41.017815659Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.558174ms
grafana | logger=migrator t=2025-06-13T14:56:41.021342052Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2025-06-13T14:56:41.021540215Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=201.603µs
grafana | logger=migrator t=2025-06-13T14:56:41.023905981Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2025-06-13T14:56:41.024265707Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=356.906µs
grafana | logger=migrator t=2025-06-13T14:56:41.026541141Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
grafana | logger=migrator t=2025-06-13T14:56:41.026556761Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=16.37µs
grafana | logger=migrator t=2025-06-13T14:56:41.032272238Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version"
grafana | logger=migrator t=2025-06-13T14:56:41.037808282Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=5.532534ms
grafana | logger=migrator t=2025-06-13T14:56:41.043183273Z level=info msg="Executing migration" id="create team table"
grafana | logger=migrator t=2025-06-13T14:56:41.044363551Z level=info msg="Migration successfully executed" id="create team table" duration=1.181278ms
grafana | logger=migrator t=2025-06-13T14:56:41.048645046Z level=info msg="Executing migration" id="add index team.org_id"
grafana | logger=migrator t=2025-06-13T14:56:41.04956337Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=918.324µs
grafana | logger=migrator t=2025-06-13T14:56:41.055343527Z level=info msg="Executing migration" id="add unique index team_org_id_name"
grafana | logger=migrator t=2025-06-13T14:56:41.05681441Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.469893ms
grafana | logger=migrator t=2025-06-13T14:56:41.061879537Z level=info msg="Executing migration" id="Add column uid in team"
grafana | logger=migrator t=2025-06-13T14:56:41.067550802Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.671265ms
grafana | logger=migrator t=2025-06-13T14:56:41.082380627Z level=info msg="Executing migration" id="Update uid column values in team"
grafana | logger=migrator t=2025-06-13T14:56:41.082790103Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=409.476µs
grafana | logger=migrator t=2025-06-13T14:56:41.089685437Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:41.090946366Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.260929ms
grafana | logger=migrator t=2025-06-13T14:56:41.098511961Z level=info msg="Executing migration" id="Add column external_uid in team"
grafana | logger=migrator t=2025-06-13T14:56:41.106250508Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=7.736516ms
grafana | logger=migrator t=2025-06-13T14:56:41.113516518Z level=info msg="Executing migration" id="Add column is_provisioned in team"
grafana | logger=migrator t=2025-06-13T14:56:41.119791183Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=6.270065ms
grafana | logger=migrator t=2025-06-13T14:56:41.123907286Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2025-06-13T14:56:41.124768498Z level=info msg="Migration successfully executed" id="create team member table" duration=860.813µs
grafana | logger=migrator t=2025-06-13T14:56:41.129444429Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2025-06-13T14:56:41.130561246Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.116377ms
grafana | logger=migrator t=2025-06-13T14:56:41.138466056Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2025-06-13T14:56:41.139573582Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.107526ms
grafana | logger=migrator t=2025-06-13T14:56:41.146182193Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2025-06-13T14:56:41.147924219Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.742026ms
grafana | logger=migrator t=2025-06-13T14:56:41.158402837Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2025-06-13T14:56:41.166885476Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.482639ms
grafana | logger=migrator t=2025-06-13T14:56:41.171326673Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2025-06-13T14:56:41.18106919Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=9.740817ms
grafana | logger=migrator t=2025-06-13T14:56:41.186649935Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2025-06-13T14:56:41.191826143Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.176208ms
grafana | logger=migrator t=2025-06-13T14:56:41.209576872Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id"
grafana | logger=migrator t=2025-06-13T14:56:41.211117086Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.540214ms
grafana | logger=migrator t=2025-06-13T14:56:41.215532542Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2025-06-13T14:56:41.216921393Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.388851ms
grafana | logger=migrator t=2025-06-13T14:56:41.220527218Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
grafana | logger=migrator t=2025-06-13T14:56:41.22137745Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=849.932µs
grafana | logger=migrator t=2025-06-13T14:56:41.226597599Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | logger=migrator t=2025-06-13T14:56:41.228131643Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.533614ms
grafana | logger=migrator t=2025-06-13T14:56:41.231452163Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | logger=migrator t=2025-06-13T14:56:41.233003947Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.550564ms
grafana | logger=migrator t=2025-06-13T14:56:41.237526635Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2025-06-13T14:56:41.238388879Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=861.834µs
grafana | logger=migrator t=2025-06-13T14:56:41.246566322Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2025-06-13T14:56:41.248115516Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.544943ms
grafana | logger=migrator t=2025-06-13T14:56:41.251809191Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2025-06-13T14:56:41.252795337Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=985.456µs
grafana | logger=migrator t=2025-06-13T14:56:41.255920414Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2025-06-13T14:56:41.256803587Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=882.663µs
grafana | logger=migrator t=2025-06-13T14:56:41.265737862Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2025-06-13T14:56:41.266173568Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=435.466µs
grafana | logger=migrator t=2025-06-13T14:56:41.271411608Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
grafana | logger=migrator t=2025-06-13T14:56:41.271757643Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=345.625µs
grafana | logger=migrator t=2025-06-13T14:56:41.27684671Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2025-06-13T14:56:41.277969148Z level=info msg="Migration successfully executed" id="create tag table" duration=1.121478ms
grafana | logger=migrator t=2025-06-13T14:56:41.281260237Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2025-06-13T14:56:41.28213469Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=874.013µs
grafana | logger=migrator t=2025-06-13T14:56:41.285286398Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2025-06-13T14:56:41.28600189Z level=info msg="Migration successfully executed" id="create login attempt table" duration=717.612µs
grafana | logger=migrator t=2025-06-13T14:56:41.290852612Z level=info msg="Executing migration" id="add index login_attempt.username"
grafana | logger=migrator t=2025-06-13T14:56:41.291800347Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=947.265µs
grafana | logger=migrator t=2025-06-13T14:56:41.295029136Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2025-06-13T14:56:41.295886309Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=856.773µs
grafana | logger=migrator t=2025-06-13T14:56:41.29991652Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-13T14:56:41.314134715Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.216525ms
grafana | logger=migrator t=2025-06-13T14:56:41.319312803Z level=info msg="Executing migration" id="create login_attempt v2"
grafana | logger=migrator t=2025-06-13T14:56:41.319865582Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=552.669µs
grafana | logger=migrator t=2025-06-13T14:56:41.334061107Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
grafana | logger=migrator t=2025-06-13T14:56:41.335047632Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=988.445µs
grafana | logger=migrator t=2025-06-13T14:56:41.34092215Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
grafana | logger=migrator t=2025-06-13T14:56:41.341234825Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=312.525µs
grafana | logger=migrator t=2025-06-13T14:56:41.345386738Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
grafana | logger=migrator t=2025-06-13T14:56:41.345992267Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=602.069µs
grafana | logger=migrator t=2025-06-13T14:56:41.354600838Z level=info msg="Executing migration" id="create user auth table"
grafana | logger=migrator t=2025-06-13T14:56:41.357789756Z level=info msg="Migration successfully executed" id="create user auth table" duration=3.182388ms
grafana | logger=migrator t=2025-06-13T14:56:41.370198414Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | logger=migrator t=2025-06-13T14:56:41.371387252Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.188148ms
grafana | logger=migrator t=2025-06-13T14:56:41.378264685Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
grafana | logger=migrator t=2025-06-13T14:56:41.378288727Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=25.332µs
grafana | logger=migrator t=2025-06-13T14:56:41.383991983Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
grafana | logger=migrator t=2025-06-13T14:56:41.391986743Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.99158ms
grafana | logger=migrator t=2025-06-13T14:56:41.396195607Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
grafana | logger=migrator t=2025-06-13T14:56:41.401107312Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.910795ms
grafana | logger=migrator t=2025-06-13T14:56:41.409293286Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
grafana | logger=migrator t=2025-06-13T14:56:41.422874351Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=13.579955ms
grafana | logger=migrator t=2025-06-13T14:56:41.426632908Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
grafana | logger=migrator t=2025-06-13T14:56:41.431242897Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.609079ms
grafana | logger=migrator t=2025-06-13T14:56:41.436891023Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
grafana | logger=migrator t=2025-06-13T14:56:41.437776887Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=886.464µs
grafana | logger=migrator t=2025-06-13T14:56:41.444889244Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
grafana | logger=migrator t=2025-06-13T14:56:41.450744663Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.857139ms
grafana | logger=migrator t=2025-06-13T14:56:41.488635187Z level=info msg="Executing migration" id="Add user_unique_id to user_auth"
grafana | logger=migrator t=2025-06-13T14:56:41.494621927Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.98442ms
grafana | logger=migrator t=2025-06-13T14:56:41.504196012Z level=info msg="Executing migration" id="create server_lock table"
grafana | logger=migrator t=2025-06-13T14:56:41.505649554Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.451472ms
grafana | logger=migrator t=2025-06-13T14:56:41.512815903Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
grafana | logger=migrator t=2025-06-13T14:56:41.513846758Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.030855µs
grafana | logger=migrator t=2025-06-13T14:56:41.521126908Z level=info msg="Executing migration" id="create user auth token table"
grafana | logger=migrator t=2025-06-13T14:56:41.522318957Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.194919ms
grafana | logger=migrator t=2025-06-13T14:56:41.536051845Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
grafana | logger=migrator t=2025-06-13T14:56:41.537132841Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.082116ms
grafana | logger=migrator t=2025-06-13T14:56:41.54958555Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
grafana | logger=migrator t=2025-06-13T14:56:41.551893334Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.308174ms
grafana | logger=migrator t=2025-06-13T14:56:41.559473579Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
grafana | logger=migrator t=2025-06-13T14:56:41.560591686Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.118157ms
grafana | logger=migrator t=2025-06-13T14:56:41.565987468Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
grafana | logger=migrator t=2025-06-13T14:56:41.574042929Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.050861ms
grafana | logger=migrator t=2025-06-13T14:56:41.58663857Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
grafana | logger=migrator t=2025-06-13T14:56:41.587753117Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.115457ms
grafana | logger=migrator t=2025-06-13T14:56:41.595638197Z level=info msg="Executing migration" id="add external_session_id to user_auth_token"
grafana | logger=migrator t=2025-06-13T14:56:41.605062619Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=9.420093ms
grafana | logger=migrator t=2025-06-13T14:56:41.613935974Z level=info msg="Executing migration" id="create cache_data table"
grafana | logger=migrator t=2025-06-13T14:56:41.614824017Z level=info msg="Migration successfully executed" id="create cache_data table" duration=888.103µs
grafana | logger=migrator t=2025-06-13T14:56:41.621764862Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
grafana | logger=migrator t=2025-06-13T14:56:41.622715997Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=951.465µs
grafana | logger=migrator t=2025-06-13T14:56:41.627367297Z level=info msg="Executing migration" id="create short_url table v1"
grafana | logger=migrator t=2025-06-13T14:56:41.628764098Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.395952ms
grafana | logger=migrator t=2025-06-13T14:56:41.632769058Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
grafana | logger=migrator t=2025-06-13T14:56:41.633709353Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=940.035µs
grafana | logger=migrator t=2025-06-13T14:56:41.641153715Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
grafana | logger=migrator t=2025-06-13T14:56:41.641187276Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=35.761µs
grafana | logger=migrator t=2025-06-13T14:56:41.64676781Z level=info msg="Executing migration" id="delete alert_definition table"
grafana | logger=migrator t=2025-06-13T14:56:41.646881652Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=114.292µs
grafana | logger=migrator t=2025-06-13T14:56:41.650030911Z level=info msg="Executing migration" id="recreate alert_definition table"
grafana | logger=migrator t=2025-06-13T14:56:41.65132861Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.29601ms
grafana | logger=migrator t=2025-06-13T14:56:41.657118567Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2025-06-13T14:56:41.659742147Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=2.62453ms
grafana | logger=migrator t=2025-06-13T14:56:41.666459688Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2025-06-13T14:56:41.667562906Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.102738ms
grafana | logger=migrator t=2025-06-13T14:56:41.67184924Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-13T14:56:41.67187481Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=27.43µs
grafana | logger=migrator t=2025-06-13T14:56:41.678352489Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2025-06-13T14:56:41.679964973Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.612484ms
grafana | logger=migrator t=2025-06-13T14:56:41.683859333Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2025-06-13T14:56:41.685366355Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.507072ms
grafana | logger=migrator t=2025-06-13T14:56:41.690198098Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2025-06-13T14:56:41.691977055Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.779657ms
grafana | logger=migrator t=2025-06-13T14:56:41.699108413Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2025-06-13T14:56:41.70027503Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.166907ms
grafana | logger=migrator t=2025-06-13T14:56:41.703723833Z level=info msg="Executing migration" id="Add column paused in alert_definition"
grafana | logger=migrator t=2025-06-13T14:56:41.713535171Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.808948ms
grafana | logger=migrator t=2025-06-13T14:56:41.722173442Z level=info msg="Executing migration" id="drop alert_definition table"
grafana | logger=migrator t=2025-06-13T14:56:41.723197118Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.025536ms
grafana | logger=migrator t=2025-06-13T14:56:41.736122083Z level=info msg="Executing migration" id="delete alert_definition_version table"
grafana | logger=migrator t=2025-06-13T14:56:41.736414057Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=291.594µs
grafana | logger=migrator t=2025-06-13T14:56:41.7425011Z level=info msg="Executing migration" id="recreate alert_definition_version table"
grafana | logger=migrator t=2025-06-13T14:56:41.744610342Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=2.108502ms
grafana | logger=migrator t=2025-06-13T14:56:41.98760042Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
grafana | logger=migrator t=2025-06-13T14:56:41.990088818Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=2.489298ms
grafana | logger=migrator t=2025-06-13T14:56:42.362034358Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
grafana | logger=migrator t=2025-06-13T14:56:42.364610534Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=2.573206ms
grafana | logger=migrator t=2025-06-13T14:56:42.753345378Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-13T14:56:42.753420009Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=78.611µs
grafana | logger=migrator t=2025-06-13T14:56:42.962512215Z level=info msg="Executing migration" id="drop alert_definition_version table"
grafana | logger=migrator t=2025-06-13T14:56:42.96359578Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.085605ms
grafana | logger=migrator t=2025-06-13T14:56:42.981013678Z level=info msg="Executing migration" id="create alert_instance table"
grafana | logger=migrator t=2025-06-13T14:56:42.983048656Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=2.034418ms
grafana | logger=migrator t=2025-06-13T14:56:42.990073116Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
grafana | logger=migrator t=2025-06-13T14:56:42.991134211Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.060655ms
grafana | logger=migrator t=2025-06-13T14:56:42.996337945Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
grafana | logger=migrator t=2025-06-13T14:56:42.997889777Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.548242ms
grafana | logger=migrator t=2025-06-13T14:56:43.002306049Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
grafana | logger=migrator t=2025-06-13T14:56:43.00900485Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.697891ms
grafana | logger=migrator t=2025-06-13T14:56:43.019031508Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
grafana | logger=migrator t=2025-06-13T14:56:43.020830846Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.796908ms
grafana | logger=migrator t=2025-06-13T14:56:43.032484409Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
grafana | logger=migrator t=2025-06-13T14:56:43.033589485Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.104646ms
grafana | logger=migrator t=2025-06-13T14:56:43.043423122Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
grafana | logger=migrator t=2025-06-13T14:56:43.071393829Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=28.017187ms
grafana | logger=migrator t=2025-06-13T14:56:43.075330897Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
grafana | logger=migrator t=2025-06-13T14:56:43.098447662Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.113895ms
grafana | logger=migrator t=2025-06-13T14:56:43.102484582Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
grafana | logger=migrator t=2025-06-13T14:56:43.103246963Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=762.211µs
grafana | logger=migrator t=2025-06-13T14:56:43.108572313Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
grafana | logger=migrator t=2025-06-13T14:56:43.109300453Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=727.81µs
grafana | logger=migrator t=2025-06-13T14:56:43.114224707Z level=info msg="Executing migration" id="add current_reason column related to current_state"
grafana | logger=migrator t=2025-06-13T14:56:43.120430279Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.204962ms
grafana | logger=migrator t=2025-06-13T14:56:43.12715586Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
grafana | logger=migrator t=2025-06-13T14:56:43.134810954Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=7.654214ms
grafana | logger=migrator t=2025-06-13T14:56:43.139769487Z level=info msg="Executing migration" id="create alert_rule table"
grafana | logger=migrator t=2025-06-13T14:56:43.140815264Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.050377ms
grafana | logger=migrator t=2025-06-13T14:56:43.152918574Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
grafana | logger=migrator t=2025-06-13T14:56:43.155399151Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=2.479937ms
grafana | logger=migrator t=2025-06-13T14:56:43.162207642Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
grafana | logger=migrator t=2025-06-13T14:56:43.163287608Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.079966ms
grafana | logger=migrator t=2025-06-13T14:56:43.167335838Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
grafana | logger=migrator t=2025-06-13T14:56:43.168397015Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.060277ms
grafana | logger=migrator t=2025-06-13T14:56:43.171854616Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-13T14:56:43.171876886Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=22.96µs
grafana | logger=migrator t=2025-06-13T14:56:43.175416349Z level=info msg="Executing migration" id="add column for to alert_rule"
grafana | logger=migrator t=2025-06-13T14:56:43.181976026Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.559677ms
grafana | logger=migrator t=2025-06-13T14:56:43.189271295Z level=info msg="Executing migration" id="add column annotations to alert_rule"
grafana | logger=migrator t=2025-06-13T14:56:43.200867108Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=11.595833ms
grafana | logger=migrator t=2025-06-13T14:56:43.206203248Z level=info msg="Executing migration" id="add column labels to alert_rule"
grafana | logger=migrator t=2025-06-13T14:56:43.214155486Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.951398ms
grafana | logger=migrator t=2025-06-13T14:56:43.21845352Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
grafana | logger=migrator t=2025-06-13T14:56:43.219322963Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=869.103µs
grafana | logger=migrator t=2025-06-13T14:56:43.224027513Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
grafana | logger=migrator t=2025-06-13T14:56:43.225003288Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=974.895µs
grafana | logger=migrator t=2025-06-13T14:56:43.230978047Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
grafana | logger=migrator t=2025-06-13T14:56:43.23724126Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.262783ms
grafana | logger=migrator t=2025-06-13T14:56:43.242305806Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
grafana | logger=migrator t=2025-06-13T14:56:43.25133038Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=9.024924ms
grafana | logger=migrator t=2025-06-13T14:56:43.257687264Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
grafana | logger=migrator t=2025-06-13T14:56:43.258627809Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=942.675µs
grafana | logger=migrator t=2025-06-13T14:56:43.264573217Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
grafana | logger=migrator t=2025-06-13T14:56:43.271808235Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.233988ms
grafana | logger=migrator t=2025-06-13T14:56:43.283612341Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
grafana | logger=migrator t=2025-06-13T14:56:43.294597285Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=10.988724ms
grafana | logger=migrator t=2025-06-13T14:56:43.299218583Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
grafana | logger=migrator t=2025-06-13T14:56:43.299245674Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=28.571µs
grafana | logger=migrator t=2025-06-13T14:56:43.303494038Z level=info msg="Executing migration" id="create alert_rule_version table"
grafana | logger=migrator t=2025-06-13T14:56:43.305304454Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.809917ms
grafana | logger=migrator t=2025-06-13T14:56:43.31239363Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
grafana | logger=migrator t=2025-06-13T14:56:43.313601317Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.207397ms
grafana | logger=migrator t=2025-06-13T14:56:43.318265277Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
grafana | logger=migrator t=2025-06-13T14:56:43.320177396Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.909289ms
grafana | logger=migrator t=2025-06-13T14:56:43.324293397Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-13T14:56:43.324323817Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=37.37µs
grafana | logger=migrator t=2025-06-13T14:56:43.331053178Z level=info msg="Executing migration" id="add column for to alert_rule_version"
grafana | logger=migrator t=2025-06-13T14:56:43.338296296Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.246398ms
grafana | logger=migrator t=2025-06-13T14:56:43.344094752Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
grafana | logger=migrator t=2025-06-13T14:56:43.350811993Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.717581ms
grafana | logger=migrator t=2025-06-13T14:56:43.354751701Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
grafana | logger=migrator t=2025-06-13T14:56:43.363136276Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=8.379265ms
grafana | logger=migrator t=2025-06-13T14:56:43.374235501Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
grafana | logger=migrator t=2025-06-13T14:56:43.386622545Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=12.385574ms
grafana | logger=migrator t=2025-06-13T14:56:43.392069087Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
grafana | logger=migrator t=2025-06-13T14:56:43.399146713Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.076586ms
grafana | logger=migrator t=2025-06-13T14:56:43.419934262Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
grafana | logger=migrator t=2025-06-13T14:56:43.420024353Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=91.781µs
grafana | logger=migrator t=2025-06-13T14:56:43.427352293Z level=info msg="Executing migration" id=create_alert_configuration_table
grafana | logger=migrator t=2025-06-13T14:56:43.429053879Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.701586ms
grafana | logger=migrator t=2025-06-13T14:56:43.436630721Z level=info msg="Executing migration" id="Add column default in alert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:43.447195698Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=10.545647ms
grafana | logger=migrator t=2025-06-13T14:56:43.450580329Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
grafana | logger=migrator t=2025-06-13T14:56:43.450617119Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=36.61µs
grafana | logger=migrator t=2025-06-13T14:56:43.458396205Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:43.467214797Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.821322ms
grafana | logger=migrator t=2025-06-13T14:56:43.472607197Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
grafana | logger=migrator t=2025-06-13T14:56:43.474574157Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.96596ms
grafana | logger=migrator t=2025-06-13T14:56:43.483055333Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:43.49224813Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=9.195067ms
grafana | logger=migrator t=2025-06-13T14:56:43.499541069Z level=info msg="Executing migration" id=create_ngalert_configuration_table
grafana | logger=migrator t=2025-06-13T14:56:43.500716766Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.174527ms
grafana | logger=migrator t=2025-06-13T14:56:43.505597569Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
grafana | logger=migrator t=2025-06-13T14:56:43.507579319Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.98149ms
grafana | logger=migrator t=2025-06-13T14:56:43.512507141Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:43.524622022Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=12.104441ms
grafana | logger=migrator t=2025-06-13T14:56:43.559142806Z level=info msg="Executing migration" id="create provenance_type table"
grafana | logger=migrator t=2025-06-13T14:56:43.561014665Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.872009ms
grafana | logger=migrator t=2025-06-13T14:56:43.567344759Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
grafana | logger=migrator t=2025-06-13T14:56:43.569381499Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=2.0359ms
grafana | logger=migrator t=2025-06-13T14:56:43.58759309Z level=info msg="Executing migration" id="create alert_image table"
grafana | logger=migrator t=2025-06-13T14:56:43.588477613Z level=info msg="Migration successfully executed" id="create alert_image table" duration=886.803µs
grafana | logger=migrator t=2025-06-13T14:56:43.59427934Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
grafana | logger=migrator t=2025-06-13T14:56:43.595025851Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=746.361µs
grafana | logger=migrator t=2025-06-13T14:56:43.598492723Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
grafana | logger=migrator t=2025-06-13T14:56:43.598506403Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=13.87µs
grafana | logger=migrator t=2025-06-13T14:56:43.604593253Z level=info msg="Executing migration" id=create_alert_configuration_history_table
grafana | logger=migrator t=2025-06-13T14:56:43.605313524Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=720.201µs
grafana | logger=migrator t=2025-06-13T14:56:43.610172897Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:43.610920538Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=745.8µs
grafana | logger=migrator t=2025-06-13T14:56:43.618039194Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2025-06-13T14:56:43.61846101Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2025-06-13T14:56:43.624294637Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
grafana | logger=migrator t=2025-06-13T14:56:43.624869365Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=574.198µs
grafana | logger=migrator t=2025-06-13T14:56:43.630302156Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
grafana | logger=migrator t=2025-06-13T14:56:43.631896981Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.594825ms
grafana | logger=migrator t=2025-06-13T14:56:43.636196905Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
grafana | logger=migrator t=2025-06-13T14:56:43.644442337Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.245532ms
grafana | logger=migrator t=2025-06-13T14:56:43.649644385Z level=info msg="Executing migration" id="create library_element table v1"
grafana | logger=migrator t=2025-06-13T14:56:43.650715061Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.070216ms
grafana | logger=migrator t=2025-06-13T14:56:43.654615869Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
grafana | logger=migrator t=2025-06-13T14:56:43.655692025Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.075676ms
grafana | logger=migrator t=2025-06-13T14:56:43.660146461Z level=info msg="Executing migration" id="create library_element_connection table v1"
grafana | logger=migrator t=2025-06-13T14:56:43.661831357Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.684846ms
grafana | logger=migrator t=2025-06-13T14:56:43.674877571Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
grafana | logger=migrator t=2025-06-13T14:56:43.676962592Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=2.080191ms
grafana | logger=migrator t=2025-06-13T14:56:43.683164144Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:43.684405563Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.241879ms
grafana | logger=migrator t=2025-06-13T14:56:43.690602235Z level=info msg="Executing migration" id="increase max description length to 2048"
grafana | logger=migrator t=2025-06-13T14:56:43.690667056Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=65.471µs
grafana | logger=migrator t=2025-06-13T14:56:43.69695221Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
grafana | logger=migrator t=2025-06-13T14:56:43.69698012Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=29.18µs
grafana | logger=migrator t=2025-06-13T14:56:43.700700976Z level=info msg="Executing migration" id="add library_element folder uid"
grafana | logger=migrator t=2025-06-13T14:56:43.710901658Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.200682ms
grafana | logger=migrator t=2025-06-13T14:56:43.719149401Z level=info msg="Executing migration" id="populate library_element folder_uid"
grafana | logger=migrator t=2025-06-13T14:56:43.719684718Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=534.578µs
grafana | logger=migrator t=2025-06-13T14:56:43.723355024Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
grafana | logger=migrator t=2025-06-13T14:56:43.724779334Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.42326ms
grafana | logger=migrator t=2025-06-13T14:56:43.729552686Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
grafana | logger=migrator t=2025-06-13T14:56:43.729992142Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=439.456µs
grafana | logger=migrator t=2025-06-13T14:56:43.735662697Z level=info msg="Executing migration" id="create data_keys table"
grafana | logger=migrator t=2025-06-13T14:56:43.736941955Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.278628ms
grafana | logger=migrator t=2025-06-13T14:56:43.746384477Z level=info msg="Executing migration" id="create secrets table"
grafana | logger=migrator t=2025-06-13T14:56:43.74801263Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.628153ms
grafana | logger=migrator t=2025-06-13T14:56:43.751198869Z level=info msg="Executing migration" id="rename data_keys name column to id"
grafana | logger=migrator t=2025-06-13T14:56:43.78486223Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.662591ms
grafana | logger=migrator t=2025-06-13T14:56:43.800102947Z level=info msg="Executing migration" id="add name column into data_keys"
grafana | logger=migrator t=2025-06-13T14:56:43.811625038Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=11.522391ms
grafana | logger=migrator t=2025-06-13T14:56:43.816173027Z level=info msg="Executing migration" id="copy data_keys id column values into name"
grafana | logger=migrator t=2025-06-13T14:56:43.816326709Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=153.192µs
grafana | logger=migrator t=2025-06-13T14:56:43.819617688Z level=info msg="Executing migration" id="rename data_keys name column to label"
grafana | logger=migrator t=2025-06-13T14:56:43.852429467Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.812619ms
grafana | logger=migrator t=2025-06-13T14:56:43.859098886Z level=info msg="Executing migration" id="rename data_keys id column back to name"
grafana | logger=migrator t=2025-06-13T14:56:43.891668252Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.569015ms
grafana | logger=migrator t=2025-06-13T14:56:43.895369746Z level=info msg="Executing migration" id="create kv_store table v1"
grafana | logger=migrator t=2025-06-13T14:56:43.896385071Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.015125ms
grafana | logger=migrator t=2025-06-13T14:56:43.903276554Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
grafana | logger=migrator t=2025-06-13T14:56:43.90438641Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.109856ms
grafana | logger=migrator t=2025-06-13T14:56:43.913445625Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
grafana | logger=migrator t=2025-06-13T14:56:43.913919843Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=472.968µs
grafana | logger=migrator t=2025-06-13T14:56:43.930092853Z level=info msg="Executing migration" id="create permission table"
grafana | logger=migrator t=2025-06-13T14:56:43.931341582Z level=info msg="Migration successfully executed" id="create permission table" duration=1.250279ms
grafana | logger=migrator t=2025-06-13T14:56:43.938389067Z level=info msg="Executing migration" id="add unique index permission.role_id"
grafana | logger=migrator t=2025-06-13T14:56:43.940404917Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.01887ms
grafana | logger=migrator t=2025-06-13T14:56:43.945353421Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
grafana | logger=migrator t=2025-06-13T14:56:43.946439237Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.085676ms
grafana | logger=migrator t=2025-06-13T14:56:43.950157813Z level=info msg="Executing migration" id="create role table"
grafana | logger=migrator t=2025-06-13T14:56:43.951160738Z level=info msg="Migration successfully executed" id="create role table" duration=1.002445ms
grafana | logger=migrator t=2025-06-13T14:56:43.956493917Z level=info msg="Executing migration" id="add column display_name"
grafana | logger=migrator t=2025-06-13T14:56:43.96474357Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.248953ms
grafana | logger=migrator t=2025-06-13T14:56:43.969258127Z level=info msg="Executing migration" id="add column group_name"
grafana | logger=migrator t=2025-06-13T14:56:43.97680843Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.550303ms
grafana | logger=migrator t=2025-06-13T14:56:43.979731423Z level=info msg="Executing migration" id="add index role.org_id"
grafana | logger=migrator t=2025-06-13T14:56:43.981705063Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.97304ms
grafana | logger=migrator t=2025-06-13T14:56:43.987218855Z level=info msg="Executing migration" id="add unique index role_org_id_name"
grafana | logger=migrator t=2025-06-13T14:56:43.988441673Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.222078ms
grafana | logger=migrator t=2025-06-13T14:56:43.992427263Z level=info msg="Executing migration" id="add index role_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:43.993524759Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.097186ms
grafana | logger=migrator t=2025-06-13T14:56:43.996738057Z level=info msg="Executing migration" id="create team role table"
grafana | logger=migrator t=2025-06-13T14:56:43.997762042Z level=info msg="Migration successfully executed" id="create team role table" duration=1.025365ms
grafana | logger=migrator t=2025-06-13T14:56:44.004932393Z level=info msg="Executing migration" id="add index team_role.org_id"
grafana | logger=migrator t=2025-06-13T14:56:44.007107644Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=2.174331ms
grafana | logger=migrator t=2025-06-13T14:56:44.013986021Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
grafana | logger=migrator t=2025-06-13T14:56:44.015524638Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.538617ms
grafana | logger=migrator t=2025-06-13T14:56:44.019531525Z level=info msg="Executing migration" id="add index team_role.team_id"
grafana | logger=migrator t=2025-06-13T14:56:44.020758556Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.226871ms
grafana | logger=migrator t=2025-06-13T14:56:44.025846673Z level=info msg="Executing migration" id="create user role table"
grafana | logger=migrator t=2025-06-13T14:56:44.026829119Z level=info msg="Migration successfully executed" id="create user role table" duration=982.456µs
grafana | logger=migrator t=2025-06-13T14:56:44.030094614Z level=info msg="Executing migration" id="add index user_role.org_id"
grafana | logger=migrator t=2025-06-13T14:56:44.032022537Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.926533ms
grafana | logger=migrator t=2025-06-13T14:56:44.039574945Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
grafana | logger=migrator t=2025-06-13T14:56:44.041556118Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.983033ms
grafana | logger=migrator t=2025-06-13T14:56:44.05820469Z level=info msg="Executing migration" id="add index user_role.user_id"
grafana | logger=migrator t=2025-06-13T14:56:44.060271495Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.068455ms
grafana | logger=migrator t=2025-06-13T14:56:44.067902145Z level=info msg="Executing migration" id="create builtin role table"
grafana | logger=migrator t=2025-06-13T14:56:44.068802329Z level=info msg="Migration successfully executed" id="create builtin role table" duration=899.874µs
grafana | logger=migrator t=2025-06-13T14:56:44.077889203Z level=info msg="Executing migration" id="add index builtin_role.role_id"
grafana | logger=migrator t=2025-06-13T14:56:44.07947068Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.581327ms
grafana | logger=migrator t=2025-06-13T14:56:44.083995126Z level=info msg="Executing migration" id="add index builtin_role.name"
grafana | logger=migrator t=2025-06-13T14:56:44.085050145Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.054939ms
grafana | logger=migrator t=2025-06-13T14:56:44.089460819Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
grafana | logger=migrator t=2025-06-13T14:56:44.098169296Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.707877ms
grafana | logger=migrator t=2025-06-13T14:56:44.10253073Z level=info msg="Executing migration" id="add index builtin_role.org_id"
grafana | logger=migrator t=2025-06-13T14:56:44.103255582Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=724.472µs
grafana | logger=migrator t=2025-06-13T14:56:44.107538205Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
grafana | logger=migrator t=2025-06-13T14:56:44.108378499Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=835.464µs
grafana | logger=migrator t=2025-06-13T14:56:44.112050471Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
grafana | logger=migrator t=2025-06-13T14:56:44.113552217Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.501746ms
grafana | logger=migrator t=2025-06-13T14:56:44.118148545Z level=info msg="Executing migration" id="add unique index role.uid"
grafana | logger=migrator t=2025-06-13T14:56:44.119206873Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.058328ms
grafana | logger=migrator t=2025-06-13T14:56:44.128638832Z level=info msg="Executing migration" id="create seed assignment table"
grafana | logger=migrator t=2025-06-13T14:56:44.129797942Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.15911ms
grafana | logger=migrator t=2025-06-13T14:56:44.136260931Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
grafana | logger=migrator t=2025-06-13T14:56:44.13737054Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.109609ms
grafana | logger=migrator t=2025-06-13T14:56:44.141173475Z level=info msg="Executing migration" id="add column hidden to role table"
grafana | logger=migrator t=2025-06-13T14:56:44.150719436Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.546561ms
grafana | logger=migrator t=2025-06-13T14:56:44.155160241Z level=info msg="Executing migration" id="permission kind migration"
grafana | logger=migrator t=2025-06-13T14:56:44.163436912Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.275211ms
grafana | logger=migrator t=2025-06-13T14:56:44.167355998Z level=info msg="Executing migration" id="permission
attribute migration" grafana | logger=migrator t=2025-06-13T14:56:44.17340095Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.043783ms grafana | logger=migrator t=2025-06-13T14:56:44.179939551Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-13T14:56:44.190232255Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=10.291184ms grafana | logger=migrator t=2025-06-13T14:56:44.197897015Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-13T14:56:44.199044284Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.145669ms grafana | logger=migrator t=2025-06-13T14:56:44.203627682Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-13T14:56:44.205340081Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.71209ms grafana | logger=migrator t=2025-06-13T14:56:44.211409333Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-13T14:56:44.212574123Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.16448ms grafana | logger=migrator t=2025-06-13T14:56:44.21889018Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-13T14:56:44.230532087Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=11.644227ms grafana | logger=migrator t=2025-06-13T14:56:44.234062527Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-13T14:56:44.23486996Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=810.373µs grafana | logger=migrator t=2025-06-13T14:56:44.239969867Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-13T14:56:44.241103825Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.131918ms grafana | logger=migrator t=2025-06-13T14:56:44.2454965Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-13T14:56:44.247534015Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=2.036835ms grafana | logger=migrator t=2025-06-13T14:56:44.252473059Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-13T14:56:44.25437894Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.901581ms grafana | logger=migrator t=2025-06-13T14:56:44.258675394Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-13T14:56:44.258707764Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=34.06µs grafana | logger=migrator t=2025-06-13T14:56:44.264589084Z level=info msg="Executing migration" id="create 
query_history_details table v1" grafana | logger=migrator t=2025-06-13T14:56:44.265752743Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.163499ms grafana | logger=migrator t=2025-06-13T14:56:44.269171011Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-13T14:56:44.269270963Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=56.64µs grafana | logger=migrator t=2025-06-13T14:56:44.272750052Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-13T14:56:44.27326021Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=510.188µs grafana | logger=migrator t=2025-06-13T14:56:44.278173634Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-13T14:56:44.278872165Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=699.651µs grafana | logger=migrator t=2025-06-13T14:56:44.28509168Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-13T14:56:44.285867423Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=775.703µs grafana | logger=migrator t=2025-06-13T14:56:44.289657058Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-13T14:56:44.290133846Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=475.698µs grafana | logger=migrator t=2025-06-13T14:56:44.30867175Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-13T14:56:44.309589345Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=918.385µs grafana | logger=migrator t=2025-06-13T14:56:44.315080028Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-13T14:56:44.316070805Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=990.487µs grafana | logger=migrator t=2025-06-13T14:56:44.319970051Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-13T14:56:44.321262553Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.292212ms grafana | logger=migrator t=2025-06-13T14:56:44.326141746Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-13T14:56:44.338684057Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=12.542032ms grafana | logger=migrator t=2025-06-13T14:56:44.345602205Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-13T14:56:44.345627025Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=26.53µs grafana | logger=migrator t=2025-06-13T14:56:44.351252981Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-13T14:56:44.352797366Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.544025ms grafana | 
logger=migrator t=2025-06-13T14:56:44.361648816Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-13T14:56:44.363937335Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=2.290489ms grafana | logger=migrator t=2025-06-13T14:56:44.371666386Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-13T14:56:44.373424996Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.76046ms grafana | logger=migrator t=2025-06-13T14:56:44.377365932Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-13T14:56:44.390839581Z level=info msg="Migration successfully executed" id="add correlation config column" duration=13.464938ms grafana | logger=migrator t=2025-06-13T14:56:44.398155455Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-13T14:56:44.399440216Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.285371ms grafana | logger=migrator t=2025-06-13T14:56:44.403113898Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-13T14:56:44.404528042Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.413864ms grafana | logger=migrator t=2025-06-13T14:56:44.409219481Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:56:44.431246794Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.022563ms grafana | logger=migrator t=2025-06-13T14:56:44.436801588Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-13T14:56:44.437665103Z level=info msg="Migration successfully executed" id="create correlation v2" duration=863.015µs grafana | logger=migrator t=2025-06-13T14:56:44.443587774Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-13T14:56:44.444801714Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.21342ms grafana | logger=migrator t=2025-06-13T14:56:44.447826345Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-13T14:56:44.449030196Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.206121ms grafana | logger=migrator t=2025-06-13T14:56:44.456005693Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-13T14:56:44.45754881Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.542927ms grafana | logger=migrator t=2025-06-13T14:56:44.461587098Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-13T14:56:44.461952474Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=364.966µs grafana | logger=migrator t=2025-06-13T14:56:44.4687884Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:56:44.469744536Z level=info 
msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=955.826µs grafana | logger=migrator t=2025-06-13T14:56:44.480966286Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-13T14:56:44.49060335Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.637714ms grafana | logger=migrator t=2025-06-13T14:56:44.494006807Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-13T14:56:44.50185073Z level=info msg="Migration successfully executed" id="add type column" duration=7.842673ms grafana | logger=migrator t=2025-06-13T14:56:44.505896078Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-13T14:56:44.50660985Z level=info msg="Migration successfully executed" id="create entity_events table" duration=714.822µs grafana | logger=migrator t=2025-06-13T14:56:44.511090446Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-13T14:56:44.511884239Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=793.483µs grafana | logger=migrator t=2025-06-13T14:56:44.517767389Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T14:56:44.518346428Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T14:56:44.525344517Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T14:56:44.526218162Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T14:56:44.535261015Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2025-06-13T14:56:44.536957594Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.695289ms grafana | logger=migrator t=2025-06-13T14:56:44.545136642Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-13T14:56:44.546419474Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.283071ms grafana | logger=migrator t=2025-06-13T14:56:44.565805282Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T14:56:44.568773512Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.96807ms grafana | logger=migrator t=2025-06-13T14:56:44.584732852Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T14:56:44.587356057Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.622365ms grafana | logger=migrator t=2025-06-13T14:56:44.596084464Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-13T14:56:44.597272475Z level=info msg="Migration successfully executed" id="drop index 
UQE_dashboard_public_config_uid - v2" duration=1.188531ms grafana | logger=migrator t=2025-06-13T14:56:44.606100264Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-13T14:56:44.607520848Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.450704ms grafana | logger=migrator t=2025-06-13T14:56:44.616885557Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-13T14:56:44.618090637Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.20336ms grafana | logger=migrator t=2025-06-13T14:56:44.62650498Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-13T14:56:44.627995765Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.489306ms grafana | logger=migrator t=2025-06-13T14:56:44.633785573Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-13T14:56:44.635057604Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.271711ms grafana | logger=migrator t=2025-06-13T14:56:44.639174304Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-13T14:56:44.640404165Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.229341ms grafana | logger=migrator t=2025-06-13T14:56:44.647122548Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-13T14:56:44.64838624Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.263132ms grafana | logger=migrator t=2025-06-13T14:56:44.651871909Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-13T14:56:44.676438205Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.565526ms grafana | logger=migrator t=2025-06-13T14:56:44.689838792Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-13T14:56:44.70094201Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=11.103098ms grafana | logger=migrator t=2025-06-13T14:56:44.706200668Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-13T14:56:44.712457575Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.256417ms grafana | logger=migrator t=2025-06-13T14:56:44.721580369Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-13T14:56:44.721796522Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=215.503µs grafana | logger=migrator t=2025-06-13T14:56:44.727445548Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-13T14:56:44.736495271Z level=info msg="Migration successfully executed" id="add 
share column" duration=9.045143ms grafana | logger=migrator t=2025-06-13T14:56:44.746242127Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-13T14:56:44.74645562Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=212.024µs grafana | logger=migrator t=2025-06-13T14:56:44.751200741Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-13T14:56:44.752249968Z level=info msg="Migration successfully executed" id="create file table" duration=1.028387ms grafana | logger=migrator t=2025-06-13T14:56:44.756620402Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-13T14:56:44.757726511Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.10584ms grafana | logger=migrator t=2025-06-13T14:56:44.762953649Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-13T14:56:44.764748439Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.79446ms grafana | logger=migrator t=2025-06-13T14:56:44.770030769Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-13T14:56:44.770820913Z level=info msg="Migration successfully executed" id="create file_meta table" duration=789.664µs grafana | logger=migrator t=2025-06-13T14:56:44.779437599Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-13T14:56:44.780252512Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=814.533µs grafana | logger=migrator t=2025-06-13T14:56:44.783735021Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-13T14:56:44.783749961Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=15.31µs grafana | logger=migrator t=2025-06-13T14:56:44.789766883Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-13T14:56:44.789785883Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=20.26µs grafana | logger=migrator t=2025-06-13T14:56:44.7955173Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-13T14:56:44.796378885Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=861.015µs grafana | logger=migrator t=2025-06-13T14:56:44.812875085Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-13T14:56:44.813372953Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=497.958µs grafana | logger=migrator t=2025-06-13T14:56:44.81970798Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-13T14:56:44.821124493Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.416463ms grafana | logger=migrator t=2025-06-13T14:56:44.830385061Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | 
logger=migrator t=2025-06-13T14:56:44.840376319Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.984548ms grafana | logger=migrator t=2025-06-13T14:56:44.844737474Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-13T14:56:44.845012408Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=275.014µs grafana | logger=migrator t=2025-06-13T14:56:44.85512401Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-13T14:56:44.857471179Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.349119ms grafana | logger=migrator t=2025-06-13T14:56:44.863216717Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-13T14:56:44.863621153Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=409.756µs grafana | logger=migrator t=2025-06-13T14:56:44.869832098Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-13T14:56:44.870065052Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=232.994µs grafana | logger=migrator t=2025-06-13T14:56:44.875299861Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-13T14:56:44.876080724Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=780.193µs grafana | logger=migrator t=2025-06-13T14:56:44.883923857Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-13T14:56:44.895886819Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.939802ms grafana | logger=migrator t=2025-06-13T14:56:44.901489004Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-13T14:56:44.911019396Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.525622ms grafana | logger=migrator t=2025-06-13T14:56:44.914306251Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-13T14:56:44.915261937Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=957.966µs grafana | logger=migrator t=2025-06-13T14:56:44.918915289Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-13T14:56:45.00104437Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=82.1235ms grafana | logger=migrator t=2025-06-13T14:56:45.010060772Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-13T14:56:45.011020188Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=964.196µs grafana | logger=migrator t=2025-06-13T14:56:45.020276365Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-13T14:56:45.022255168Z 
level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.978273ms grafana | logger=migrator t=2025-06-13T14:56:45.026578232Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-13T14:56:45.058516342Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=31.93567ms grafana | logger=migrator t=2025-06-13T14:56:45.069845074Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-13T14:56:45.076517347Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.673063ms grafana | logger=migrator t=2025-06-13T14:56:45.083440404Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-13T14:56:45.083906612Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=466.788µs grafana | logger=migrator t=2025-06-13T14:56:45.088828365Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-13T14:56:45.08910337Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=277.105µs grafana | logger=migrator t=2025-06-13T14:56:45.094363269Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-13T14:56:45.094745175Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=382.166µs grafana | logger=migrator t=2025-06-13T14:56:45.10443769Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-13T14:56:45.104760525Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=322.425µs grafana | logger=migrator t=2025-06-13T14:56:45.111837384Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-13T14:56:45.11217101Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=333.586µs grafana | logger=migrator t=2025-06-13T14:56:45.121691432Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-13T14:56:45.122942832Z level=info msg="Migration successfully executed" id="create folder table" duration=1.25296ms grafana | logger=migrator t=2025-06-13T14:56:45.12751862Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-13T14:56:45.129360822Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.842202ms grafana | logger=migrator t=2025-06-13T14:56:45.136153526Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-13T14:56:45.13698274Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=829.044µs grafana | logger=migrator t=2025-06-13T14:56:45.143124204Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-13T14:56:45.143144404Z level=info msg="Migration successfully executed" id="Update folder title length" duration=21.32µs grafana | 
logger=migrator t=2025-06-13T14:56:45.146287798Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-13T14:56:45.148387233Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.096215ms grafana | logger=migrator t=2025-06-13T14:56:45.155100897Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-13T14:56:45.156177955Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.074168ms grafana | logger=migrator t=2025-06-13T14:56:45.164420205Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-13T14:56:45.165546663Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.125988ms grafana | logger=migrator t=2025-06-13T14:56:45.169248197Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-13T14:56:45.169685024Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=436.177µs grafana | logger=migrator t=2025-06-13T14:56:45.173088351Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-13T14:56:45.173347536Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=259.275µs grafana | logger=migrator t=2025-06-13T14:56:45.187365383Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-13T14:56:45.191167038Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=3.802885ms grafana | logger=migrator t=2025-06-13T14:56:45.194917341Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-13T14:56:45.195858897Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=941.316µs grafana | logger=migrator t=2025-06-13T14:56:45.201755176Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-13T14:56:45.202903046Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.14156ms grafana | logger=migrator t=2025-06-13T14:56:45.2090359Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-13T14:56:45.211206677Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.172287ms grafana | logger=migrator t=2025-06-13T14:56:45.215309096Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-13T14:56:45.216543117Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.233721ms grafana | logger=migrator t=2025-06-13T14:56:45.221260057Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-13T14:56:45.222562589Z level=info msg="Migration successfully executed" id="Remove unique index 
UQE_folder_org_id_parent_uid_title" duration=1.301433ms grafana | logger=migrator t=2025-06-13T14:56:45.226423134Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-13T14:56:45.227422832Z level=info msg="Migration successfully executed" id="create anon_device table" duration=996.618µs grafana | logger=migrator t=2025-06-13T14:56:45.234615263Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-13T14:56:45.237324949Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.708806ms grafana | logger=migrator t=2025-06-13T14:56:45.243096386Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-13T14:56:45.244152225Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.055359ms grafana | logger=migrator t=2025-06-13T14:56:45.24800784Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-13T14:56:45.248863734Z level=info msg="Migration successfully executed" id="create signing_key table" duration=855.424µs grafana | logger=migrator t=2025-06-13T14:56:45.259091997Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-13T14:56:45.263019843Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=3.923746ms grafana | logger=migrator t=2025-06-13T14:56:45.267647292Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-13T14:56:45.26988732Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.239738ms grafana | logger=migrator t=2025-06-13T14:56:45.274722181Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-13T14:56:45.275378133Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=658.092µs grafana | logger=migrator t=2025-06-13T14:56:45.281605738Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-13T14:56:45.29350669Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.903242ms grafana | logger=migrator t=2025-06-13T14:56:45.330271872Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-13T14:56:45.33131223Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.043108ms grafana | logger=migrator t=2025-06-13T14:56:45.339922666Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-13T14:56:45.339956196Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=36.86µs grafana | logger=migrator t=2025-06-13T14:56:45.348391089Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-13T14:56:45.349767992Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.374373ms grafana | logger=migrator 
t=2025-06-13T14:56:45.380817907Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-13T14:56:45.380842608Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=26.021µs grafana | logger=migrator t=2025-06-13T14:56:45.392077948Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-13T14:56:45.394251025Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.172477ms grafana | logger=migrator t=2025-06-13T14:56:45.401661281Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-13T14:56:45.403116265Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.454784ms grafana | logger=migrator t=2025-06-13T14:56:45.411213452Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-13T14:56:45.413230486Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.016664ms grafana | logger=migrator t=2025-06-13T14:56:45.419866148Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-13T14:56:45.421845452Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.978684ms grafana | logger=migrator t=2025-06-13T14:56:45.42586909Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-13T14:56:45.426669914Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=801.404µs grafana | logger=migrator t=2025-06-13T14:56:45.432538323Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-13T14:56:45.433012621Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=475.608µs grafana | logger=migrator t=2025-06-13T14:56:45.437969105Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-13T14:56:45.438968072Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=998.127µs grafana | logger=migrator t=2025-06-13T14:56:45.44244332Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-13T14:56:45.443365267Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=918.287µs grafana | logger=migrator t=2025-06-13T14:56:45.456453748Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-13T14:56:45.457493626Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.042248ms grafana | logger=migrator t=2025-06-13T14:56:45.463187662Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-13T14:56:45.474461693Z level=info msg="Migration successfully executed" id="add stack_id column" duration=11.266441ms grafana | 
logger=migrator t=2025-06-13T14:56:45.480390553Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-13T14:56:45.490108177Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.715294ms grafana | logger=migrator t=2025-06-13T14:56:45.497252819Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-13T14:56:45.51090963Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=13.651501ms grafana | logger=migrator t=2025-06-13T14:56:45.51620895Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-13T14:56:45.52566919Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.458709ms grafana | logger=migrator t=2025-06-13T14:56:45.531228564Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-13T14:56:45.531578989Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=350.295µs grafana | logger=migrator t=2025-06-13T14:56:45.547787394Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-13T14:56:45.549253018Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.487804ms grafana | logger=migrator t=2025-06-13T14:56:45.554925744Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-13T14:56:45.563698844Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=8.77327ms grafana | logger=migrator t=2025-06-13T14:56:45.588201698Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-13T14:56:45.588790698Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=593.22µs grafana | logger=migrator t=2025-06-13T14:56:45.596709452Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-13T14:56:45.597993044Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.283332ms grafana | logger=migrator t=2025-06-13T14:56:45.603303764Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:56:45.628964028Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=25.660844ms grafana | logger=migrator t=2025-06-13T14:56:45.633170289Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-13T14:56:45.633926232Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=755.643µs grafana | logger=migrator t=2025-06-13T14:56:45.640175568Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-13T14:56:45.641074393Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=898.805µs grafana | logger=migrator t=2025-06-13T14:56:45.646928842Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator 
t=2025-06-13T14:56:45.647605854Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=676.322µs grafana | logger=migrator t=2025-06-13T14:56:45.656735998Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:56:45.658392286Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.651878ms grafana | logger=migrator t=2025-06-13T14:56:45.670339039Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T14:56:45.694974135Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=24.635176ms grafana | logger=migrator t=2025-06-13T14:56:45.698468695Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-13T14:56:45.699313549Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=844.324µs grafana | logger=migrator t=2025-06-13T14:56:45.715477563Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-13T14:56:45.718409092Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=2.910379ms grafana | logger=migrator t=2025-06-13T14:56:45.725927589Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-13T14:56:45.726313786Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=385.797µs grafana | logger=migrator t=2025-06-13T14:56:45.729800315Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-13T14:56:45.730744861Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=941.766µs grafana | logger=migrator t=2025-06-13T14:56:45.738501512Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-13T14:56:45.751862198Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=13.360736ms grafana | logger=migrator t=2025-06-13T14:56:45.758192845Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-13T14:56:45.768822286Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=10.62944ms grafana | logger=migrator t=2025-06-13T14:56:45.773475784Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-13T14:56:45.783125078Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.648283ms grafana | logger=migrator t=2025-06-13T14:56:45.789950703Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-13T14:56:45.800548922Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=10.597409ms grafana | logger=migrator t=2025-06-13T14:56:45.808173152Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-13T14:56:45.815855442Z level=info msg="Migration successfully 
executed" id="add snapshot encryption_key column" duration=7.68165ms grafana | logger=migrator t=2025-06-13T14:56:45.822228289Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-13T14:56:45.829291758Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=7.058619ms grafana | logger=migrator t=2025-06-13T14:56:45.845241119Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-13T14:56:45.847621619Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=2.38126ms grafana | logger=migrator t=2025-06-13T14:56:45.854721729Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-13T14:56:45.892642251Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=37.915322ms grafana | logger=migrator t=2025-06-13T14:56:45.896419185Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-13T14:56:45.905825385Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=9.4023ms grafana | logger=migrator t=2025-06-13T14:56:45.917443961Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-13T14:56:45.929071788Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=11.627547ms grafana | logger=migrator t=2025-06-13T14:56:45.933973731Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-13T14:56:45.944108982Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=10.133421ms grafana | logger=migrator t=2025-06-13T14:56:45.954649331Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-13T14:56:45.964250834Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.587962ms grafana | logger=migrator t=2025-06-13T14:56:45.982720706Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-13T14:56:45.982753196Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=35.9µs grafana | logger=migrator t=2025-06-13T14:56:45.991015496Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-13T14:56:45.991037957Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=24.051µs grafana | logger=migrator t=2025-06-13T14:56:46.001760288Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:46.015295098Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=13.537259ms grafana | logger=migrator t=2025-06-13T14:56:46.030048827Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:46.043293791Z level=info msg="Migration successfully executed" id="add notification_settings column to 
alert_rule_version table" duration=13.246564ms grafana | logger=migrator t=2025-06-13T14:56:46.053562795Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-13T14:56:46.053939222Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=375.917µs grafana | logger=migrator t=2025-06-13T14:56:46.060027755Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-13T14:56:46.060411701Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=384.116µs grafana | logger=migrator t=2025-06-13T14:56:46.073844289Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:46.088022299Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=14.17704ms grafana | logger=migrator t=2025-06-13T14:56:46.11885194Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:46.129071553Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=10.218413ms grafana | logger=migrator t=2025-06-13T14:56:46.138834759Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-13T14:56:46.146338906Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=7.504877ms grafana | logger=migrator t=2025-06-13T14:56:46.1531195Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-13T14:56:46.163087959Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=9.968329ms grafana | logger=migrator t=2025-06-13T14:56:46.175720603Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-13T14:56:46.176292982Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=574.959µs grafana | logger=migrator t=2025-06-13T14:56:46.182704741Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:46.189845562Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=7.140751ms grafana | logger=migrator t=2025-06-13T14:56:46.197565773Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:46.20450273Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=6.928736ms grafana | logger=migrator t=2025-06-13T14:56:46.218808812Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-13T14:56:46.219188789Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=380.047µs grafana | logger=migrator t=2025-06-13T14:56:46.244908174Z level=info msg="Executing migration" id="adding action set 
permissions" grafana | logger=migrator t=2025-06-13T14:56:46.245517414Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=610.78µs grafana | logger=migrator t=2025-06-13T14:56:46.256922487Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-13T14:56:46.258761339Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.838602ms grafana | logger=migrator t=2025-06-13T14:56:46.270191362Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-13T14:56:46.270221403Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=34.201µs grafana | logger=migrator t=2025-06-13T14:56:46.285622703Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-13T14:56:46.285669354Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=47.831µs grafana | logger=migrator t=2025-06-13T14:56:46.294494653Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-13T14:56:46.295195255Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=700.052µs grafana | logger=migrator t=2025-06-13T14:56:46.30557807Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:46.317553094Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=11.975704ms grafana | logger=migrator t=2025-06-13T14:56:46.330732986Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:46.343815978Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=13.083082ms grafana | logger=migrator t=2025-06-13T14:56:46.354029541Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-13T14:56:46.355830561Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.80045ms grafana | logger=migrator t=2025-06-13T14:56:46.378610737Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-13T14:56:46.380885675Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=2.275808ms grafana | logger=migrator t=2025-06-13T14:56:46.38589195Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-13T14:56:46.396318917Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=10.426427ms grafana | logger=migrator t=2025-06-13T14:56:46.403174373Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T14:56:46.41485171Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=11.676407ms grafana | logger=migrator t=2025-06-13T14:56:46.420537046Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator 
t=2025-06-13T14:56:46.420558467Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-13T14:56:46.420837072Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-13T14:56:46.420852033Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=315.477µs grafana | logger=migrator t=2025-06-13T14:56:46.425864917Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-13T14:56:46.426502328Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=637.141µs grafana | logger=migrator t=2025-06-13T14:56:46.431751656Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-13T14:56:46.4337433Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.991154ms grafana | logger=migrator t=2025-06-13T14:56:46.437925991Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-13T14:56:46.439362415Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.437114ms grafana | logger=migrator t=2025-06-13T14:56:46.445772773Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-13T14:56:46.447005775Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.232572ms grafana | logger=migrator t=2025-06-13T14:56:46.450483084Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-13T14:56:46.451710205Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.223932ms grafana | logger=migrator t=2025-06-13T14:56:46.458013701Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-13T14:56:46.471352987Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=13.340036ms grafana | logger=migrator t=2025-06-13T14:56:46.478088681Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-13T14:56:46.487178585Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.088724ms grafana | logger=migrator t=2025-06-13T14:56:46.495385404Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-13T14:56:46.507357976Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=11.971772ms grafana | logger=migrator t=2025-06-13T14:56:46.510867046Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-13T14:56:46.520105582Z level=info msg="Migration successfully executed" id="add 
missing_series_evals_to_resolve column to alert_rule_version" duration=9.237486ms grafana | logger=migrator t=2025-06-13T14:56:46.526258236Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-13T14:56:46.526447859Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-13T14:56:46.526461289Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=203.433µs grafana | logger=migrator t=2025-06-13T14:56:46.531656858Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-13T14:56:46.533624351Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.963274ms grafana | logger=migrator t=2025-06-13T14:56:46.538398212Z level=info msg="migrations completed" performed=654 skipped=0 duration=7.497013238s grafana | logger=migrator t=2025-06-13T14:56:46.539084484Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-13T14:56:46.554520434Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-13T14:56:46.554719878Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-13T14:56:46.560595537Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T14:56:46.651079769Z level=info msg="Restored cache from database" duration=747.703µs grafana | logger=resource-migrator t=2025-06-13T14:56:46.662716856Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-13T14:56:46.662738447Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-13T14:56:46.670335945Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-13T14:56:46.671252311Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=915.816µs grafana | logger=resource-migrator t=2025-06-13T14:56:46.679038103Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-13T14:56:46.679126124Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=89.531µs grafana | logger=resource-migrator t=2025-06-13T14:56:46.685513222Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-13T14:56:46.685617944Z level=info msg="Migration successfully executed" id="drop table resource" duration=105.372µs grafana | logger=resource-migrator t=2025-06-13T14:56:46.689412758Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-13T14:56:46.690519497Z level=info msg="Migration successfully executed" id="create table resource" duration=1.10575ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.695049333Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:56:46.696395326Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.344853ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.701846409Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-13T14:56:46.702092793Z level=info msg="Migration successfully 
executed" id="drop table resource_history" duration=246.514µs grafana | logger=resource-migrator t=2025-06-13T14:56:46.709732312Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-13T14:56:46.710973413Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.240021ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.716036269Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:56:46.717726308Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.687299ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.724876118Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-13T14:56:46.727290619Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=2.413391ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.733338981Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-13T14:56:46.733459133Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=123.032µs grafana | logger=resource-migrator t=2025-06-13T14:56:46.748660701Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-13T14:56:46.750337149Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.677358ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.756659816Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:56:46.759060687Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=2.401721ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.764939956Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-13T14:56:46.765027737Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=88.281µs grafana | logger=resource-migrator t=2025-06-13T14:56:46.769618766Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-13T14:56:46.770750214Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.128609ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.7751764Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-13T14:56:46.7764247Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.24776ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.784552458Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-13T14:56:46.787462217Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=2.906879ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.7934941Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-13T14:56:46.805177508Z level=info msg="Migration successfully executed" id="Add column 
previous_resource_version in resource_history" duration=11.682678ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.809293847Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-13T14:56:46.822577692Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=13.284035ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.830991014Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-13T14:56:46.833607769Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=2.616655ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.838820327Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-13T14:56:46.840133369Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.312862ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.843795502Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-13T14:56:46.854684515Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.888733ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.859256413Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-13T14:56:46.869991415Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=10.733662ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.876460294Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-13T14:56:46.876503065Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-13T14:56:46.876987253Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=526.499µs grafana | logger=resource-migrator t=2025-06-13T14:56:46.881293106Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-13T14:56:46.883817119Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=2.523693ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.888991147Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-13T14:56:46.901455257Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=12.464571ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.910625942Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-13T14:56:46.912964762Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=2.33798ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.919078655Z level=info msg="migrations completed" performed=26 skipped=0 duration=248.78428ms grafana | logger=resource-migrator t=2025-06-13T14:56:46.919887789Z level=info msg="Unlocking database" grafana | t=2025-06-13T14:56:46.920181834Z level=info caller=logger.go:214 time=2025-06-13T14:56:46.920162344Z 
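At this point both migrators have finished against the default SQLite store: 654 core migrations in roughly 7.5s and 26 resource migrations in ~249ms, all with skipped=0. Grafana records applied core migrations in its migration_log table (the resource migrator keeps its own resource_migration_log, created above), so the run can be audited by querying the store directly. A minimal sketch only, assuming the container is named grafana, the default database path /var/lib/grafana/grafana.db, and a sqlite3 binary inside the image (not every Grafana image ships one):
  # Count successfully applied core migrations (the log above reports 654 performed)
  docker exec grafana sqlite3 /var/lib/grafana/grafana.db \
    'SELECT COUNT(*) FROM migration_log WHERE success = 1;'
  # Show the five most recently applied migration ids
  docker exec grafana sqlite3 /var/lib/grafana/grafana.db \
    'SELECT migration_id FROM migration_log ORDER BY timestamp DESC LIMIT 5;'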
msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-13T14:56:46.931497196Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-13T14:56:46.966231023Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-13T14:56:46.966254374Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-13T14:56:46.966283484Z level=info msg="Plugins loaded" count=53 duration=34.786948ms grafana | logger=query_data t=2025-06-13T14:56:46.97070909Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-13T14:56:46.97492589Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-13T14:56:46.987083387Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-13T14:56:47.005469308Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-13T14:56:47.005514218Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-13T14:56:47.009886143Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=ngalert.state.manager t=2025-06-13T14:56:47.01030754Z level=info msg="Warming state cache for startup" grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:47.011740453Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=ngalert.multiorg.alertmanager t=2025-06-13T14:56:47.011809466Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=grafanaStorageLogger t=2025-06-13T14:56:47.011938088Z level=info msg="Storage starting" grafana | logger=http.server t=2025-06-13T14:56:47.01386473Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=ngalert.state.manager t=2025-06-13T14:56:47.092919828Z level=info msg="State cache has been initialized" states=0 duration=82.593347ms grafana | logger=ngalert.scheduler t=2025-06-13T14:56:47.093005759Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-13T14:56:47.093423087Z level=info msg=starting first_tick=2025-06-13T14:56:50Z grafana | logger=plugins.update.checker t=2025-06-13T14:56:47.111664345Z level=info msg="Update check succeeded" duration=101.379946ms grafana | logger=grafana.update.checker t=2025-06-13T14:56:47.119083191Z level=info msg="Update check succeeded" duration=108.843943ms grafana | logger=sqlstore.transactions t=2025-06-13T14:56:47.138348247Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T14:56:47.144238617Z level=info msg="Patterns update finished" duration=131.339793ms grafana | logger=provisioning.datasources t=2025-06-13T14:56:47.228083366Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2025-06-13T14:56:47.260482514Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-13T14:56:47.260515705Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-13T14:56:47.262783383Z 
level=info msg="starting to provision dashboards" grafana | logger=plugin.installer t=2025-06-13T14:56:48.142496404Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.223892982Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.227487312Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.228257006Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.228799045Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.229469876Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.230646786Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.232242503Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.237718276Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.23910257Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-13T14:56:48.294162501Z level=info msg="app registry initialized" grafana | logger=installer.fs t=2025-06-13T14:56:48.296567452Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-13T14:56:48.328859729Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.328961971Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=1.317191786s grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.329051332Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-13T14:56:48.522641849Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-13T14:56:48.575393152Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-13T14:56:48.591403542Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.591421752Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=262.337969ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.591437033Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=plugin.installer t=2025-06-13T14:56:48.760581116Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=provisioning.dashboard t=2025-06-13T14:56:48.798372086Z level=info msg="finished to provision dashboards" grafana | logger=installer.fs t=2025-06-13T14:56:48.821343854Z level=info 
msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-13T14:56:48.838054357Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.838076497Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=246.634564ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.838097728Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-13T14:56:49.027611456Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-13T14:56:49.087966698Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-13T14:56:49.106486731Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:49.106506131Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=268.403763ms grafana | logger=infra.usagestats t=2025-06-13T14:57:26.019490994Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2025-06-13 14:56:43,515] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.arch=amd64 
(org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,519] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,522] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-13 14:56:43,526] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-13 14:56:43,534] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:43,551] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:43,551] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:43,559] INFO Socket connection established, initiating session, client: /172.17.0.5:56728, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:43,596] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x10000023f370000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:43,715] INFO Session: 0x10000023f370000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:43,715] INFO EventThread shut down for session: 0x10000023f370000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2025-06-13 14:56:44,344] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-13 14:56:44,609] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-13 14:56:44,685] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-13 14:56:44,687] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-13 14:56:44,687] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-13 14:56:44,699] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
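The preflight above opens a throwaway ZooKeeper session (0x10000023f370000), confirms the ensemble answers, and closes it again before the broker proper connects. The same liveness question can be asked over ZooKeeper's four-letter-word interface; a sketch, run from any container on the compose network with nc available, and assuming ruok is on the server's 4lw.commands.whitelist (only srvr is enabled by default in ZooKeeper 3.5+):
  echo ruok | nc zookeeper 2181   # a healthy server answers "imok"
  echo srvr | nc zookeeper 2181   # mode, latency and connection counters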
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 14:56:44,702] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,702] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,702] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,702] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,702] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,702] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../
share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,702] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib 
(org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,704] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@584f54e6 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 14:56:44,708] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-13 14:56:44,713] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:44,714] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 14:56:44,721] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:44,739] INFO Socket connection established, initiating session, client: /172.17.0.5:56730, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:44,750] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x10000023f370001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 14:56:44,757] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 14:56:45,099] INFO Cluster ID = d-rF8NzzQdGshpvqUU-qrg (kafka.server.KafkaServer) kafka | [2025-06-13 14:56:45,103] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-13 14:56:45,153] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | 
log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | 
remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka 
| ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2025-06-13 14:56:45,187] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-13 14:56:45,188] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-13 14:56:45,190] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-13 14:56:45,191] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-13 14:56:45,231] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2025-06-13 14:56:45,233] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) kafka | [2025-06-13 14:56:45,247] INFO Loaded 0 logs in 16ms. (kafka.log.LogManager) kafka | [2025-06-13 14:56:45,247] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2025-06-13 14:56:45,249] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
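Almost everything in the KafkaConfig dump above is a stock default; the values that actually identify this deployment are broker.id=1, the paired PLAINTEXT/PLAINTEXT_HOST listeners on 9092/29092, zookeeper.connect=zookeeper:2181, and offsets.topic.replication.factor=1 for the single-broker setup. With the Confluent images these are derived from KAFKA_-prefixed environment variables (property dots become underscores). A hedged stand-alone equivalent of what the compose file presumably sets, with the image tag inferred from the 7.4.9-ccs jars on the classpath and the network name invented for illustration:
  docker run -d --name kafka --network csit-net \
    -e KAFKA_BROKER_ID=1 \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
    -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
    -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
    confluentinc/cp-kafka:7.4.9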
kafka | [2025-06-13 14:56:45,259] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2025-06-13 14:56:45,313] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka | [2025-06-13 14:56:45,335] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2025-06-13 14:56:45,349] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-13 14:56:45,392] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 14:56:45,719] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-13 14:56:45,723] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-13 14:56:45,744] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2025-06-13 14:56:45,744] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-13 14:56:45,745] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-13 14:56:45,749] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2025-06-13 14:56:45,753] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 14:56:45,770] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:45,772] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:45,773] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:45,776] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:45,793] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2025-06-13 14:56:45,817] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
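With both data-plane listeners accepting connections, the broker registers itself in ZooKeeper by creating the ephemeral znode /brokers/ids/1; the znode disappears if the broker's session (zookeeper.session.timeout.ms = 18000 above) expires, which is how the controller notices broker failures. A sketch that reads the registration back with the plain ZooKeeper client (endpoint taken from the config above, error handling omitted):

    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class BrokerRegistrationPeek {
        public static void main(String[] args) throws Exception {
            // 18000 ms mirrors zookeeper.session.timeout.ms from the broker config
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000, event -> { });
            Stat stat = new Stat();
            // /brokers/ids/1 holds a JSON payload listing broker 1's advertised endpoints
            byte[] data = zk.getData("/brokers/ids/1", false, stat);
            System.out.println(new String(data));
            System.out.println("czxid (broker epoch): " + stat.getCzxid());
            zk.close();
        }
    }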
kafka | [2025-06-13 14:56:45,850] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749826605831,1749826605831,1,0,0,72057603688431617,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:56:45,852] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:56:45,905] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-13 14:56:45,913] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:45,920] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:45,921] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:45,935] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 14:56:45,940] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:45,945] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:45,947] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:56:45,954] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:45,963] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-13 14:56:45,974] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-13 14:56:45,977] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-13 14:56:45,978] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-13 14:56:45,999] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
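Broker 1 has just created /controller_epoch and elected itself controller (epoch 1), and the group and transaction coordinators are up. Which node currently holds the controller role can be read back through the admin API; a sketch, assuming the host-side listener:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.common.Node;

    public class ControllerPeek {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:29092"); // PLAINTEXT_HOST listener from above
            try (Admin admin = Admin.create(props)) {
                // For a ZooKeeper-backed cluster this reports the broker holding the controller role
                Node controller = admin.describeCluster().controller().get();
                System.out.println("controller: id=" + controller.id()
                        + " at " + controller.host() + ":" + controller.port());
            }
        }
    }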
kafka | [2025-06-13 14:56:45,999] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,008] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,013] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,015] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,022] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 14:56:46,043] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,051] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,055] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-13 14:56:46,061] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-13 14:56:46,069] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-13 14:56:46,072] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,073] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,073] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,073] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,078] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,078] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,078] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,079] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
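Initializing the controller context amounts to listing the ephemeral ids under /brokers/ids and caching each broker's epoch (the znode czxid, 27 for this broker, hence HashMap(1 -> 27)). A sketch of the same listing with a child watch, so broker arrivals and session expirations surface as events; endpoint and timeout are taken from the config above:

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class BrokerLivenessWatch {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000,
                    event -> System.out.println("zk event: " + event.getType() + " " + event.getPath()));
            // Passing true arms the default watcher: a NodeChildrenChanged event fires
            // when a broker id znode appears or an ephemeral owner's session expires.
            List<String> ids = zk.getChildren("/brokers/ids", true);
            System.out.println("live brokers: " + ids); // expect [1] in this single-broker setup
            Thread.sleep(60_000); // keep the session open long enough to observe a change
            zk.close();
        }
    }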
kafka | [2025-06-13 14:56:46,079] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-13 14:56:46,080] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,084] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-13 14:56:46,095] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 14:56:46,095] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 14:56:46,096] INFO Kafka startTimeMs: 1749826606090 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-13 14:56:46,096] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:56:46,097] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:56:46,098] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2025-06-13 14:56:46,103] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:56:46,104] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-13 14:56:46,104] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 14:56:46,105] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 14:56:46,107] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-13 14:56:46,108] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,110] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2025-06-13 14:56:46,129] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,129] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,129] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,130] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,132] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,151] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:46,205] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
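The replica and partition state machines initialized here produce the long runs of state.change.logger lines that follow: every partition walks NonExistentPartition -> NewPartition -> OnlinePartition while each of its replicas walks NonExistentReplica -> NewReplica -> OnlineReplica. A toy illustration of the partition side only, not Kafka's actual implementation:

    public class PartitionStateSketch {
        // State names follow the state.change.logger lines below; rules are simplified.
        enum State { NON_EXISTENT, NEW, ONLINE, OFFLINE }

        static State advance(State s) {
            switch (s) {
                case NON_EXISTENT: return State.NEW;    // creation callback assigns replicas
                case NEW:          return State.ONLINE; // leader elected, LeaderAndIsr written
                case OFFLINE:      return State.ONLINE; // leader re-elected after a failure
                default:           return s;            // ONLINE stays put until something fails
            }
        }

        public static void main(String[] args) {
            State s = State.NON_EXISTENT;
            while (s != State.ONLINE) {
                State next = advance(s);
                System.out.println(s + " -> " + next);
                s = next;
            }
        }
    }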
kafka | [2025-06-13 14:56:46,209] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 14:56:46,256] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 14:56:51,153] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2025-06-13 14:56:51,154] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:13,303] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-13 14:57:13,304] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:13,314] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:13,322] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-13 14:57:13,341] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(4jifs8wHRkq0H0ikcQqZKA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:13,341] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
kafka | [2025-06-13 14:57:13,343] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-13 14:57:13,343] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
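Two topics are created at this point: policy-pdp-pap, requested by the policy components with broker defaults and a single partition, and the internal __consumer_offsets topic, which the broker lays out itself with 50 partitions (the offsets.topic.num.partitions default) and the compacted-log settings shown in the entry; a consumer group's offsets land on one of those 50 partitions, chosen by hashing the group id modulo the partition count. The client-side equivalent for application topics, as a sketch (the second topic name is illustrative; the offsets topic is never created by clients):

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:29092");
            try (Admin admin = Admin.create(props)) {
                // Mirrors "Creating topic policy-pdp-pap with configuration {}":
                // one partition, replication factor 1 on this single-broker cluster.
                NewTopic plain = new NewTopic("policy-pdp-pap", 1, (short) 1);
                // A hypothetical user topic given the same settings the offsets topic gets above:
                NewTopic compacted = new NewTopic("example-compacted", 1, (short) 1)
                        .configs(Map.of(
                                "cleanup.policy", "compact",
                                "segment.bytes", "104857600",
                                "compression.type", "producer"));
                admin.createTopics(List.of(plain, compacted)).all().get();
            }
        }
    }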
kafka | [2025-06-13 14:57:13,347] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-13 14:57:13,347] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-13 14:57:13,367] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-13 14:57:13,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-13 14:57:13,370] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2025-06-13 14:57:13,373] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
kafka | [2025-06-13 14:57:13,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-13 14:57:13,373] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-13 14:57:13,377] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-13 14:57:13,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(kCb7ZUH-RSyInYvWegYy6A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | 
[2025-06-13 14:57:13,382] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,384] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka 
| [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to 
NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:13,387] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 14:57:13,416] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-13 14:57:13,421] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) 
(kafka.server.ReplicaFetcherManager) kafka | [2025-06-13 14:57:13,421] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2025-06-13 14:57:13,538] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller 
id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] 
INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | 
kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
[... 49 further identical become-leader LeaderAndIsr requests to broker 1, logged 14:57:13,550-552, one per partition: __consumer_offsets-46, -9, -42, -21, -17, -30, -26, -5, -38, -1, -34, -16, -45, -12, -41, -24, -20, -49, -0, -29, -25, -8, -37, -4, -33, -15, -48, -11, -44, -23, -19, -32, -28, -7, -40, -3, -36, -47, -14, -43, -10, -22, -18, -31, -27, -39, -6, -35, -2 ...]
kafka | [2025-06-13 14:57:13,552] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2025-06-13 14:57:13,552] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
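The controller batches the 50 per-partition become-leader directives into one LeaderAndIsr request (50 become-leader, 0 become-follower) plus an UpdateMetadata request so the broker refreshes its metadata cache; 50 is Kafka's default offsets.topic.num.partitions. A quick sanity check of the outcome, as a sketch: "kafka" as container and bootstrap name is taken from the broker address kafka:9092 in this log, and the CLI may be kafka-topics or kafka-topics.sh depending on the image.

    # Every __consumer_offsets partition should report broker 1 as leader.
    # Each partition line of --describe output contains "Leader: 1"; count them.
    docker exec kafka kafka-topics --bootstrap-server kafka:9092 \
      --describe --topic __consumer_offsets | grep -c 'Leader: 1'
    # Prints 50 if all partitions are led by broker 1, matching the log above.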
kafka | [2025-06-13 14:57:13,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
[... 49 further identical NewReplica to OnlineReplica transitions for replica 1, logged 14:57:13,553-555, one per partition: __consumer_offsets-5, -44, -48, -46, -20, -43, -24, -6, -18, -21, -1, -14, -34, -16, -29, -11, -0, -22, -47, -36, -28, -42, -9, -37, -13, -30, -35, -39, -12, -27, -45, -19, -49, -40, -41, -38, -8, -7, -33, -25, -31, -23, -10, -2, -17, -4, -15, -26, -3 ...]
kafka | [2025-06-13 14:57:13,555] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-13 14:57:13,565] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,566] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,567] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,568] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(4jifs8wHRkq0H0ikcQqZKA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
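policy-pdp-pap is the topic the Policy Administration Point and the XACML PDP exchange messages on. "No checkpointed highwatermark" together with a leader starting at epoch 0 from offset 0 simply means the topic was just created and is still empty; it is not an error. A sketch for peeking at the first message once PAP or the PDP publishes (standard kafka-console-consumer flags; container name assumed as above):

    # Block until the first record arrives on the new topic, then exit.
    docker exec kafka kafka-console-consumer --bootstrap-server kafka:9092 \
      --topic policy-pdp-pap --from-beginning --max-messages 1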
kafka | [2025-06-13 14:57:13,576] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-13 14:57:13,582] INFO [Broker id=1] Finished LeaderAndIsr request in 205ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-13 14:57:13,586] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=4jifs8wHRkq0H0ikcQqZKA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-13 14:57:13,592] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:57:13,593] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-13 14:57:13,593] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-13 14:57:13,597] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
[... 49 further identical Received LeaderAndIsr request entries, logged 14:57:13,597-598, one per partition: __consumer_offsets-46, -9, -42, -21, -17, -30, -26, -5, -38, -1, -34, -16, -45, -12, -41, -24, -20, -49, -0, -29, -25, -8, -37, -4, -33, -15, -48, -11, -44, -23, -19, -32, -28, -7, -40, -3, -36, -47, -14, -43, -10, -22, -18, -31, -27, -39, -6, -35, -2 ...]
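The correlation IDs tie each controller request to its response: correlationId 1 was the LeaderAndIsr round for policy-pdp-pap-0, correlationId 2 the matching UpdateMetadata round, and correlationId 3 is this 50-partition LeaderAndIsr batch for __consumer_offsets; errorCode=0 in LeaderAndIsrResponseData and UpdateMetadataResponseData confirms each round trip succeeded. A sketch for isolating one exchange in the captured broker output (assuming the container is named "kafka"; the log text uses both "correlationId 3" and "correlation id 3", so the pattern matches either form):

    # Pull the first few lines of the correlationId-3 exchange out of the broker log.
    docker logs kafka 2>&1 | grep -E 'correlation ?[iI]d 3' | head -n 5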
epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-13 14:57:13,629] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
kafka | [2025-06-13 14:57:13,630] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger)
kafka | [2025-06-13 14:57:13,636] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,637] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,638] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,638] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,638] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
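The "Created log" entries above and below record the settings Kafka applied to each __consumer_offsets partition ({cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}). A minimal sketch for verifying those settings against the running broker, assuming the compose service/container is named "kafka" and the broker listens on localhost:9092 (neither is confirmed by this excerpt):
# Hypothetical container name and listener; adjust to the actual compose setup.
docker exec kafka kafka-configs --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name __consumer_offsets --describe --all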
kafka | [2025-06-13 14:57:13,645] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,645] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,645] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,646] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,646] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,653] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,653] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,653] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,653] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,654] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,662] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,663] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,663] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,663] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,663] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,672] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,673] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,673] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,673] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,673] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,683] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,684] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,684] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,684] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,684] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,693] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,695] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,696] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,696] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,696] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,703] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,704] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,705] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,705] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,705] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,717] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,718] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,719] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,719] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,719] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,728] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,729] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,729] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,729] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,729] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,738] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,740] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,740] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,740] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,740] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,751] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,752] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,752] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,752] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,753] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,761] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,762] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,762] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,763] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,763] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,770] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,771] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,772] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,772] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,772] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,779] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,780] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,780] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,780] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,781] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,788] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,790] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,790] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,790] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,790] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,798] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,798] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,799] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,799] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,799] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,807] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,808] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,808] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,808] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,808] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,815] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,816] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,816] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,816] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,817] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,823] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,824] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,824] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,824] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,824] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,833] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,834] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,835] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,835] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,835] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,843] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,844] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,844] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,844] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,844] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,853] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,854] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,854] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,854] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,854] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,861] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,862] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,862] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,863] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,863] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,869] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,870] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,870] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,870] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,871] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,876] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,877] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,877] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,877] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,877] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,884] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,885] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,886] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,886] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,886] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-13 14:57:13,894] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-13 14:57:13,895] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-13 14:57:13,895] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,896] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-13 14:57:13,896] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
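Every partition above follows the same five-step pattern: producer-state load, log creation, missing high-watermark checkpoint, log loaded at high watermark 0, then the become-leader state change at leader epoch 0. A quick tally of the partition directories created so far, reusing the hypothetical "kafka" container name (the data dir /var/lib/kafka/data is taken from the log itself); the count should reach 50 once all transitions finish:
# Counts __consumer_offsets-* partition directories under the broker's data dir.
docker exec kafka sh -c 'ls /var/lib/kafka/data | grep -c "^__consumer_offsets-"'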
(state.change.logger) kafka | [2025-06-13 14:57:13,901] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,902] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,902] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,902] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,902] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:13,910] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,911] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,911] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,911] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,911] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:13,917] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,918] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,918] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,918] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,918] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:57:13,927] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,928] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,928] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,929] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,929] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:13,937] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,938] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,939] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,939] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,939] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:13,946] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,946] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,947] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,947] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,947] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:57:13,954] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,955] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,955] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,955] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,956] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:13,967] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,968] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,968] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,968] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,968] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:13,978] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,978] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,979] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,979] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,979] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:57:13,986] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,987] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,987] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,987] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,987] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:13,995] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:13,995] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:13,996] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,996] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:13,996] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:14,002] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,002] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,003] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,003] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,003] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:57:14,010] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,010] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,010] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,010] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,010] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:14,019] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,020] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,020] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,020] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,020] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:14,028] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,028] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,028] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,029] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,029] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:57:14,039] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,039] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,040] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,040] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,040] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:14,048] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,049] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,049] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,049] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,050] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:14,056] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,057] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,057] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,057] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,057] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 14:57:14,066] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,067] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,067] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,067] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,067] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:14,074] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,075] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,075] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,075] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,075] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:14,083] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,084] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,084] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,084] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,084] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
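Each "Created log" entry records the effective topic configuration: cleanup.policy=compact (the offsets topic is log-compacted rather than size- or time-deleted), compression.type=producer (the broker retains whatever codec the producer used) and segment.bytes=104857600, i.e. 100 MiB segments. A hedged way to confirm the same settings from a client, reusing the kafka:9092 listener that appears elsewhere in this log:

$ kafka-configs --bootstrap-server kafka:9092 --entity-type topics \
    --entity-name __consumer_offsets --describe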
(state.change.logger) kafka | [2025-06-13 14:57:14,091] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:14,091] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 14:57:14,091] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,091] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:14,092] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | 
[2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr 
request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the 
become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-13 14:57:14,096] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,098] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,099] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,099] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,099] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 
0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for 
epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 
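The alternating "Elected as the group coordinator for partition N" / "Scheduling loading ... from __consumer_offsets-N" pairs cover all 50 offsets partitions; Kafka assigns a consumer group to a coordinator partition as abs(group.id.hashCode) % offsets.topic.num.partitions (50 here), so whichever broker leads that partition coordinates the group. Once loading completes, group placement can be checked with the stock tooling (the group id below is a placeholder, not taken from this log):

$ kafka-consumer-groups --bootstrap-server kafka:9092 --list
$ kafka-consumer-groups --bootstrap-server kafka:9092 --describe --group <group-id>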
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 
0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,102] INFO [Broker id=1] Finished LeaderAndIsr request in 505ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2025-06-13 14:57:14,103] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=kCb7ZUH-RSyInYvWegYy6A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 14:57:14,105] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,108] 
TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 
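These cached-leader TRACE lines mirror the per-partition state the controller persisted in ZooKeeper (this is a ZooKeeper-mode cluster; note the zkVersion field in each entry). A sketch for reading one partition's state znode directly, assuming a zookeeper:2181 endpoint is reachable from the broker container:

$ zookeeper-shell zookeeper:2181 get /brokers/topics/__consumer_offsets/partitions/13/state
# expected shape: {"controller_epoch":1,"leader":1,"version":1,"leader_epoch":0,"isr":[1]}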
(state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent 
by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,110] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,110] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,110] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,110] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 14:57:14,110] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 11 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,111] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 14:57:14,111] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,114] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,114] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,115] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,115] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,115] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,115] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,118] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. 
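
The GroupMetadataManager entries above load one __consumer_offsets partition each because every consumer group is pinned to a single offsets partition, chosen as (Java String.hashCode of the group id, masked positive) modulo the 50 partitions. A sketch reproducing that mapping in Python; the function name is hypothetical, but the result matches the coordinator entries below, where group policy-pap lands on __consumer_offsets-24:

    def java_string_hashcode(s: str) -> int:
        """Java String.hashCode with 32-bit signed overflow semantics."""
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - 0x100000000 if h >= 0x80000000 else h

    # Kafka masks the hash positive (hash & 0x7FFFFFFF) before the modulo.
    print((java_string_hashcode("policy-pap") & 0x7FFFFFFF) % 50)  # -> 24
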
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 21 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 14:57:14,169] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,188] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,208] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 044ad9e7-7f73-4e67-ada5-d3c6274784bc in Empty state. 
Created a new member id consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:14,213] INFO [GroupCoordinator 1]: Preparing to rebalance group 044ad9e7-7f73-4e67-ada5-d3c6274784bc in state PreparingRebalance with old generation 0 (__consumer_offsets-5) (reason: Adding new member consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f with group instance id None; client reason: need to re-join with the given member-id: consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:15,130] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group bcceede6-cf80-4e3b-b200-9e273dce58d5 in Empty state. Created a new member id consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:15,134] INFO [GroupCoordinator 1]: Preparing to rebalance group bcceede6-cf80-4e3b-b200-9e273dce58d5 in state PreparingRebalance with old generation 0 (__consumer_offsets-0) (reason: Adding new member consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11 with group instance id None; client reason: need to re-join with the given member-id: consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:17,200] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:17,216] INFO [GroupCoordinator 1]: Stabilized group 044ad9e7-7f73-4e67-ada5-d3c6274784bc generation 1 (__consumer_offsets-5) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:17,225] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:17,225] INFO [GroupCoordinator 1]: Assignment received from leader consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f for group 044ad9e7-7f73-4e67-ada5-d3c6274784bc for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:18,135] INFO [GroupCoordinator 1]: Stabilized group bcceede6-cf80-4e3b-b200-9e273dce58d5 generation 1 (__consumer_offsets-0) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:18,150] INFO [GroupCoordinator 1]: Assignment received from leader consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11 for group bcceede6-cf80-4e3b-b200-9e273dce58d5 for generation 1. The group has 1 members, 0 of which are static. 
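
The coordinator entries above trace the standard group-membership handshake: a first JoinGroup with an unknown member id is answered with a freshly minted id and a rejoin request, the group enters PreparingRebalance, stabilizes once all members have rejoined, and the leader's SyncGroup supplies the assignment. A minimal client sketch that would drive the same sequence, assuming kafka-python; the topic name is an assumption (any subscribed topic works), while the group id policy-pap matches the log:

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "policy-pdp-pap",               # hypothetical topic name
        bootstrap_servers="kafka:9092",
        group_id="policy-pap",
    )
    consumer.poll(timeout_ms=5000)  # first poll sends JoinGroup -> rebalance above
    consumer.close()                # sends LeaveGroup, like the testgrp member later
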
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 14:57:20,071] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-13 14:57:20,085] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(fqYupnA9Qemly06nHPEaTw),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-13 14:57:20,085] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) kafka | [2025-06-13 14:57:20,085] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 14:57:20,085] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 14:57:20,085] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 14:57:20,085] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 14:57:20,097] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 14:57:20,097] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) kafka | [2025-06-13 14:57:20,097] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-13 14:57:20,098] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2025-06-13 14:57:20,098] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 14:57:20,098] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 14:57:20,099] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-13 14:57:20,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 5 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 14:57:20,100] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 epoch 1 starting the become-leader transition for partition 
policy-notification-0 (state.change.logger) kafka | [2025-06-13 14:57:20,100] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-13 14:57:20,100] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 5 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2025-06-13 14:57:20,103] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 14:57:20,103] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-13 14:57:20,104] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:20,104] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 14:57:20,104] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(fqYupnA9Qemly06nHPEaTw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 14:57:20,108] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 5 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) kafka | [2025-06-13 14:57:20,108] INFO [Broker id=1] Finished LeaderAndIsr request in 9ms correlationId 5 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-13 14:57:20,109] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=fqYupnA9Qemly06nHPEaTw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 5 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 14:57:20,110] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger) kafka | [2025-06-13 14:57:20,110] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger) kafka | [2025-06-13 14:57:20,111] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 6 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 14:58:50,717] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-c926bd1f-9cd8-41a3-b657-e55b27f99de9 and request the member to rejoin with this id. 
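
The controller activity above (new topic policy-notification, partition state NonExistentPartition -> NewPartition -> OnlinePartition, then LeaderAndIsr and UpdateMetadata fan-out) was triggered by topic auto-creation via kafka.zk.AdminZkClient. A sketch of the explicit client-side equivalent, assuming kafka-python; the replication settings mirror the single-broker assignment in the log:

    from kafka.admin import KafkaAdminClient, NewTopic

    admin = KafkaAdminClient(bootstrap_servers="kafka:9092")
    admin.create_topics([NewTopic(name="policy-notification",
                                  num_partitions=1,
                                  replication_factor=1)])
    admin.close()
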
(kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:58:50,719] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-c926bd1f-9cd8-41a3-b657-e55b27f99de9 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:58:53,720] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:58:53,723] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-c926bd1f-9cd8-41a3-b657-e55b27f99de9 for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:58:53,847] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-c926bd1f-9cd8-41a3-b657-e55b27f99de9 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:58:53,848] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 14:58:53,850] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-c926bd1f-9cd8-41a3-b657-e55b27f99de9, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator)
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |
policy-api | :: Spring Boot :: (v3.4.6)
policy-api |
policy-api | [2025-06-13T14:56:53.336+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-13T14:56:53.432+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 32 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-13T14:56:53.433+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-13T14:56:54.851+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-13T14:56:55.025+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 162 ms. Found 6 JPA repository interfaces.
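
The "Waiting for policy-db-migrator port 6824..." line above is the container's startup gate: it blocks until a TCP connect to the dependency succeeds. A minimal Python sketch of the same pattern (the function name is hypothetical; host and port are the ones from the log):

    import socket
    import time

    def wait_for_port(host: str, port: int, interval: float = 1.0) -> None:
        """Block until a TCP connect succeeds, like the container's nc loop."""
        while True:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return
            except OSError:
                time.sleep(interval)

    wait_for_port("policy-db-migrator", 6824)
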
policy-api | [2025-06-13T14:56:55.671+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-api | [2025-06-13T14:56:55.685+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-13T14:56:55.687+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2025-06-13T14:56:55.688+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-api | [2025-06-13T14:56:55.730+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2025-06-13T14:56:55.731+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2240 ms policy-api | [2025-06-13T14:56:56.047+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2025-06-13T14:56:56.139+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-api | [2025-06-13T14:56:56.192+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2025-06-13T14:56:56.560+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2025-06-13T14:56:56.601+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2025-06-13T14:56:56.795+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@239d9cb7 policy-api | [2025-06-13T14:56:56.798+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-api | [2025-06-13T14:56:56.876+00:00|INFO|pooling|main] HHH10001005: Database info: policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-api | Database driver: undefined/unknown policy-api | Database version: 16.4 policy-api | Autocommit mode: undefined/unknown policy-api | Isolation level: undefined/unknown policy-api | Minimum pool size: undefined/unknown policy-api | Maximum pool size: undefined/unknown policy-api | [2025-06-13T14:56:58.862+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2025-06-13T14:56:58.865+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2025-06-13T14:56:59.486+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2025-06-13T14:57:00.341+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2025-06-13T14:57:01.397+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2025-06-13T14:57:01.442+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-13T14:57:02.058+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-13T14:57:02.195+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-13T14:57:02.214+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-13T14:57:02.237+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.55 seconds (process running for 10.142)
policy-api | [2025-06-13T14:57:39.916+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-13T14:57:39.917+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-13T14:57:39.918+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
policy-api | [2025-06-13T14:58:26.298+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers:
policy-api | []
policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
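
The ROBOT_VARIABLES above are ordinary Robot Framework -v name:value pairs. A sketch of the equivalent invocation through Robot's Python API, assuming the suite files sit in the working directory (only a few of the pairs are repeated here; robot.run returns 0 when every test passes, which is the RESULT: 0 reported at the end of the run below):

    from robot import run

    run(
        "xacml-pdp-test.robot", "xacml-pdp-slas.robot",
        variable=[
            "POLICY_API_IP:policy-api:6969",
            "POLICY_PDPX_IP:policy-xacml-pdp:6969",
            "KAFKA_IP:kafka:9092",
            "PROMETHEUS_IP:prometheus:9090",
            # ... remaining -v pairs from ROBOT_VARIABLES above
        ],
        outputdir="/tmp/results",
    )
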
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | MakeTopics :: Creates the Policy topics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ExecuteXacmlPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
policy-csit | 4 tests, 4 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
policy-csit | 6 tests, 6 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.2) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
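
The "List of databases" blocks that follow are psql \l output printed by the migrator. A sketch producing the same listing over SQL, assuming psycopg2 is available; the credentials here are illustrative, only the host, port, and role names come from the log:

    import psycopg2

    conn = psycopg2.connect(host="postgres", port=5432, dbname="postgres",
                            user="policy_user", password="...")  # placeholder
    with conn.cursor() as cur:
        cur.execute("SELECT datname, pg_get_userbyid(datdba) "
                    "FROM pg_database ORDER BY datname")
        for name, owner in cur.fetchall():
            print(f"{name:<18} | {owner}")  # mirrors the migrator's database table
    conn.close()
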
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | 
operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
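
Each "> upgrade NNNN-*.sql" block above runs one DDL script (the CREATE TABLE) and then records the outcome (the INSERT 0 1) in the policyadmin_schema_changelog table whose columns (id, script, operation, from_version, to_version, tag, success, attime) appear at the end of this log. A hedged sketch of that bookkeeping step; the table and column names are taken from the log, but the exact SQL is an assumed approximation of what the migrator's script does:

    import psycopg2

    def record_migration(conn, script: str, from_v: str, to_v: str,
                         tag: str, success: int) -> None:
        """Append one changelog row, as each 'INSERT 0 1' above indicates."""
        with conn.cursor() as cur:
            cur.execute(
                """INSERT INTO policyadmin_schema_changelog
                       (script, operation, from_version, to_version, tag, success, attime)
                   VALUES (%s, 'upgrade', %s, %s, %s, %s, now())""",
                (script, from_v, to_v, tag, success),
            )
        conn.commit()

    # e.g. record_migration(conn, "0600-toscanodetemplate.sql",
    #                       "0", "0800", "1306251456390800u", 1)
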
policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.18976 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.235723 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.288446 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.346232 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.402747 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.453333 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.505238 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.552547 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.609198 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 
1306251456390800u | 1 | 2025-06-13 14:56:39.6593 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.710297 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.757171 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.807078 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.859371 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.910159 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.964163 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.013874 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.065374 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.122799 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.178803 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.226544 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.277946 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.32896 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.379188 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.428567 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.486557 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.539177 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.592826 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.645962 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.698371 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.757659 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.806122 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.859425 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.920719 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.982926 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 
0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.037888 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.105255 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.165672 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.231306 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.295163 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.367422 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.424178 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.509181 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.566089 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.634884 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.688548 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.743224 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:42.072799 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:42.465725 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:42.713722 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:42.989643 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.04045 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.089425 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.142743 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.203577 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.251598 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.306785 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.357922 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.406305 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.465583 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.521667 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.57892 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 
2025-06-13 14:56:43.637727 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.700012 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.759518 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.823759 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.877346 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.932977 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.979422 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.03203 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.093639 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.154862 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.217875 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.26724 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.328594 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.385515 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.438286 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.484409 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.53775 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.589197 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.637113 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.684419 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.743862 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.793522 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.842049 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.898488 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.952905 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 
14:56:44.999682 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.050318 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.108885 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.155557 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.206137 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.251001 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.301566 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.383107 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.443344 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.508293 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.565212 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.61694 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.67316 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.744069 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.798014 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.854932 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.936889 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.990733 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:46.059254 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:46.131309 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:46.190798 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:46.248394 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.30331 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.357894 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.422851 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.476869 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1306251456391000u | 
1 | 2025-06-13 14:56:46.526239 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.581657 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.64603 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.698133 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.745466 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1306251456391100u | 1 | 2025-06-13 14:56:46.789182 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1306251456391200u | 1 | 2025-06-13 14:56:46.836778 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1306251456391200u | 1 | 2025-06-13 14:56:46.897075 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1306251456391200u | 1 | 2025-06-13 14:56:46.955134 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1306251456391200u | 1 | 2025-06-13 14:56:47.022035 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1306251456391300u | 1 | 2025-06-13 14:56:47.065072 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1306251456391300u | 1 | 2025-06-13 14:56:47.110425 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1306251456391300u | 1 | 2025-06-13 14:56:47.157675 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0200-automationcompositiondefinition.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-nodetemplatestate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-participant.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-participantsupportedelements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-participantreplica.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-participant.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-participant_replica_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-nodetemplatestate.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
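Before running these batches, the migrator decides whether an upgrade is needed at all: it reads the schema's current version from the schema_versions table (the "name | version" listing that recurs in this log) and compares it with the highest prepared release, which is what produces lines such as "clampacm: upgrade available: 0 -> 1701" earlier above. A hypothetical JDBC version of that check, assuming only the two columns shown in the log and an integer-typed version:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class UpgradeCheck {
    // Prints "<schema>: upgrade available: <current> -> <target>" when the
    // recorded version lags the highest prepared release (e.g. 1701 for clampacm).
    public static void check(Connection conn, String schema, int target) throws Exception {
        String q = "SELECT version FROM schema_versions WHERE name = ?";
        try (PreparedStatement ps = conn.prepareStatement(q)) {
            ps.setString(1, schema);
            try (ResultSet rs = ps.executeQuery()) {
                int current = rs.next() ? rs.getInt("version") : 0;  // fresh schema starts at 0
                if (current < target) {
                    System.out.printf("%s: upgrade available: %d -> %d%n", schema, current, target);
                }
            }
        }
    }
}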
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcomposition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-message.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-messagejob.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-automationcomposition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-nodetemplatestate.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-participantreplica.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.420588 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.502112 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.55626 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.608803 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.666759 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.720684 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.772399 policy-db-migrator | 8 | 
0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.820544 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.868675 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.931443 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.986197 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:49.038842 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:49.09263 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.14466 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.196379 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.25297 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.302315 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.352949 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.409067 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.457724 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.501656 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1306251456481600u | 1 | 2025-06-13 14:56:49.549514 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1306251456481600u | 1 | 2025-06-13 14:56:49.596024 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1306251456481601u | 1 | 2025-06-13 14:56:49.640674 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1306251456481601u | 1 | 2025-06-13 14:56:49.695805 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1306251456481700u | 1 | 2025-06-13 14:56:49.741225 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1306251456481700u | 1 | 2025-06-13 14:56:49.807165 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1306251456481700u | 1 | 2025-06-13 14:56:49.861697 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:49.914433 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:49.973585 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.023946 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.076834 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.126906 policy-db-migrator | 34 | 
0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.180079 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.241525 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.290505 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.332297 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | 
| | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1306251456501600u | 1 | 2025-06-13 14:56:50.977018 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 
policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator |  clampacm          | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  migration         | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  operationshistory | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  policyadmin       | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  policyclamp       | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  pooling           | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  postgres          | postgres    | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           |
policy-db-migrator |  template0         | postgres    | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =c/postgres                 +
policy-db-migrator |                    |             |          |                 |            |            |            |           | postgres=CTc/postgres
policy-db-migrator |  template1         | postgres    | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =c/postgres                 +
policy-db-migrator |                    |             |          |                 |            |            |            |           | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator |        name        | version
policy-db-migrator | -------------------+---------
policy-db-migrator |  operationshistory | 1600
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator |  id |             script             | operation | from_version | to_version |        tag        | success |           attime
policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator |   1 | 0100-ophistory_id_sequence.sql | upgrade   | 1500         | 1600       | 1306251456511600u | 1       | 2025-06-13 14:56:51.600731
policy-db-migrator |   2 | 0110-operationshistory.sql     | upgrade   | 1500         | 1600       | 1306251456511600u | 1       | 2025-06-13 14:56:51.662828
policy-db-migrator | (2 rows)
policy-db-migrator |
policy-db-migrator | operationshistory: OK @ 1600
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.7:6969) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.5:9092) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap |   .   ____          _            __ _ _
[Spring Boot ASCII-art startup banner] policy-pap | :: Spring Boot :: (v3.4.6) policy-pap | policy-pap | [2025-06-13T14:57:04.008+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 54 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-13T14:57:04.009+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" policy-pap | [2025-06-13T14:57:05.377+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-13T14:57:05.468+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 78 ms. Found 7 JPA repository interfaces. policy-pap | [2025-06-13T14:57:06.421+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-13T14:57:06.435+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-13T14:57:06.437+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-13T14:57:06.437+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-13T14:57:06.490+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-13T14:57:06.490+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2425 ms policy-pap | [2025-06-13T14:57:06.929+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-13T14:57:07.010+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-13T14:57:07.054+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-13T14:57:07.481+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-13T14:57:07.523+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-13T14:57:07.729+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6e337ba1 policy-pap | [2025-06-13T14:57:07.731+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
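The HikariPool-1 lines above are PAP's JPA datasource coming up against the PostgreSQL instance whose databases were listed by the migrator. A minimal sketch of how such a pool is typically built; the JDBC URL, password, and pool size are illustrative assumptions, since the log only reports "Connecting through datasource 'HikariDataSource (HikariPool-1)'" and marks the pool sizes as undefined/unknown.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PapDataSourceSketch {
    public static HikariDataSource build() {
        HikariConfig cfg = new HikariConfig();
        // Assumed URL; database and owner names are taken from the migrator's database listing.
        cfg.setJdbcUrl("jdbc:postgresql://postgres:5432/policyadmin");
        cfg.setUsername("policy_user");
        cfg.setPassword("<secret>");     // placeholder, not a value from this log
        cfg.setMaximumPoolSize(10);      // illustrative; the log reports this as undefined/unknown
        // Constructing the datasource triggers the "HikariPool-1 - Starting..." / "Start completed." lines.
        return new HikariDataSource(cfg);
    }
}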
policy-pap | [2025-06-13T14:57:07.823+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-13T14:57:09.740+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-13T14:57:09.744+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-13T14:57:10.931+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 044ad9e7-7f73-4e67-ada5-d3c6274784bc policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 
300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T14:57:10.985+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:57:11.122+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:57:11.122+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:57:11.122+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826631120 policy-pap | [2025-06-13T14:57:11.124+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-1, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T14:57:11.125+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms 
= 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location 
= null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T14:57:11.125+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:57:11.133+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:57:11.133+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:57:11.133+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826631133 policy-pap | [2025-06-13T14:57:11.133+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T14:57:11.481+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-13T14:57:11.601+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-13T14:57:11.677+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-13T14:57:11.887+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
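The two ConsumerConfig dumps above (clientIds consumer-044ad9e7-...-1 and consumer-policy-pap-2) are printed by the Kafka client when a consumer is constructed. A minimal sketch of a consumer matching the dumped values for the policy-pap group; every property not set here falls back to the client defaults shown in the dump.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        // Values taken from the ConsumerConfig dump above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Matches the "Subscribed to topic(s): policy-pdp-pap" lines logged above.
        consumer.subscribe(List.of("policy-pdp-pap"));
        return consumer;
    }
}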
policy-pap | [2025-06-13T14:57:12.610+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-13T14:57:12.732+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-13T14:57:12.753+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-13T14:57:12.775+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-13T14:57:12.775+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-13T14:57:12.775+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-13T14:57:12.776+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-13T14:57:12.776+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-13T14:57:12.776+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-13T14:57:12.776+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-13T14:57:12.778+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=044ad9e7-7f73-4e67-ada5-d3c6274784bc, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2096ade6 policy-pap | [2025-06-13T14:57:12.788+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=044ad9e7-7f73-4e67-ada5-d3c6274784bc, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:57:12.788+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 044ad9e7-7f73-4e67-ada5-d3c6274784bc policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | 
group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null 
policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T14:57:12.789+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:57:12.795+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:57:12.795+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:57:12.795+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826632795 policy-pap | [2025-06-13T14:57:12.796+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T14:57:12.796+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-13T14:57:12.796+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7f2861c0-7dab-4ee1-a7da-eaad47fd4b7e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@687fa4d0 policy-pap | [2025-06-13T14:57:12.796+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7f2861c0-7dab-4ee1-a7da-eaad47fd4b7e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:57:12.797+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 
policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | 
ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T14:57:12.797+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:57:12.802+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:57:12.802+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:57:12.802+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826632802 policy-pap | [2025-06-13T14:57:12.803+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T14:57:12.803+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-13T14:57:12.803+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7f2861c0-7dab-4ee1-a7da-eaad47fd4b7e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:57:12.803+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=044ad9e7-7f73-4e67-ada5-d3c6274784bc, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T14:57:12.803+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4bb38c68-3eb8-4da3-9a8e-86958093f792, alive=false, publisher=null]]: starting policy-pap | [2025-06-13T14:57:12.814+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | 
max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 
policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-13T14:57:12.815+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:57:12.825+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-13T14:57:12.839+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:57:12.839+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:57:12.839+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826632839 policy-pap | [2025-06-13T14:57:12.839+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4bb38c68-3eb8-4da3-9a8e-86958093f792, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-13T14:57:12.839+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=792f5b7e-8121-456f-8173-aac7159e2ce8, alive=false, publisher=null]]: starting policy-pap | [2025-06-13T14:57:12.840+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | 
sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-13T14:57:12.840+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T14:57:12.840+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
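Both ProducerConfig dumps (producer-1 and producer-2) show acks = -1, enable.idempotence = true, and String serializers, which is why the client logs "Instantiated an idempotent producer." A minimal sketch under those settings; the topic is the policy-pdp-pap sink seen elsewhere in this log, and the payload string is illustrative only.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        // Idempotence implies acks=all (-1) and bounded retries, as shown in the dump above.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Illustrative payload; the real PDP messages are the JSON bodies logged below.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
        }
    }
}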
policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826632846 policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=792f5b7e-8121-456f-8173-aac7159e2ce8, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-13T14:57:12.847+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-13T14:57:12.847+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-13T14:57:12.850+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-13T14:57:12.851+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-13T14:57:12.851+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-13T14:57:12.851+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-13T14:57:12.852+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-13T14:57:12.855+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-13T14:57:12.856+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.597 seconds (process running for 10.201) policy-pap | [2025-06-13T14:57:12.856+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-13T14:57:13.292+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-13T14:57:13.294+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: d-rF8NzzQdGshpvqUU-qrg policy-pap | [2025-06-13T14:57:13.294+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Cluster ID: d-rF8NzzQdGshpvqUU-qrg policy-pap | [2025-06-13T14:57:13.295+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: d-rF8NzzQdGshpvqUU-qrg policy-pap | [2025-06-13T14:57:13.321+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-13T14:57:13.321+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-13T14:57:13.340+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | 
[2025-06-13T14:57:13.340+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: d-rF8NzzQdGshpvqUU-qrg policy-pap | [2025-06-13T14:57:13.462+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T14:57:13.495+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T14:57:14.138+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-13T14:57:14.144+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-13T14:57:14.177+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf policy-pap | [2025-06-13T14:57:14.177+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-13T14:57:14.199+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-13T14:57:14.201+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] (Re-)joining group policy-pap | [2025-06-13T14:57:14.211+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Request joining group due to: need to re-join with the given member-id: consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f policy-pap | [2025-06-13T14:57:14.211+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] (Re-)joining group policy-pap | [2025-06-13T14:57:17.203+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf', protocol='range'} policy-pap | [2025-06-13T14:57:17.213+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-13T14:57:17.218+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Successfully joined group with generation Generation{generationId=1, memberId='consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f', protocol='range'} policy-pap | [2025-06-13T14:57:17.219+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Finished assignment for group at generation 1: {consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-13T14:57:17.239+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf', protocol='range'} policy-pap | [2025-06-13T14:57:17.240+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-13T14:57:17.242+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-13T14:57:17.243+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Successfully synced group in generation Generation{generationId=1, memberId='consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f', protocol='range'} policy-pap | [2025-06-13T14:57:17.243+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-13T14:57:17.243+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-13T14:57:17.255+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-13T14:57:17.255+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-13T14:57:17.273+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
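The sequence above is the classic Kafka group rebalance: each consumer discovers the coordinator, joins generation 1 with a member-id, is assigned partition policy-pdp-pap-0, finds no committed offset, and resets to the latest position because auto.offset.reset = latest. The log lines themselves come from the Kafka client internals; a sketch of observing the same assignment from application code with a ConsumerRebalanceListener follows, reusing a consumer built as in the earlier sketch.

import java.util.Collection;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceLoggingSketch {
    public static void subscribe(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                System.out.println("Revoked: " + partitions);
            }
            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Fires after "Successfully synced group" / "Adding newly assigned partitions" above.
                System.out.println("Assigned: " + partitions);
            }
        });
    }
}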
policy-pap | [2025-06-13T14:57:17.273+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-13T14:57:19.253+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2025-06-13T14:57:19.254+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"e18e1fff-9deb-4367-a557-a7dc64389e1f","timestampMs":1749826634765,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e"} policy-pap | [2025-06-13T14:57:19.254+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"e18e1fff-9deb-4367-a557-a7dc64389e1f","timestampMs":1749826634765,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e"} policy-pap | [2025-06-13T14:57:19.257+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_TOPIC_CHECK policy-pap | [2025-06-13T14:57:19.257+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK policy-pap | [2025-06-13T14:57:19.274+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"8b7db761-4d49-42ed-9835-fab8afcf3c0a","timestampMs":1749826639257,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup"} policy-pap | [2025-06-13T14:57:19.279+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-13T14:57:19.287+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"8b7db761-4d49-42ed-9835-fab8afcf3c0a","timestampMs":1749826639257,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup"} policy-pap | [2025-06-13T14:57:19.869+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting policy-pap | [2025-06-13T14:57:19.869+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting listener policy-pap | [2025-06-13T14:57:19.869+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting timer policy-pap | [2025-06-13T14:57:19.870+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=243683ae-56ab-4597-926a-fcce27e0e31d, expireMs=1749826669870] policy-pap | [2025-06-13T14:57:19.871+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=243683ae-56ab-4597-926a-fcce27e0e31d, expireMs=1749826669870] policy-pap | [2025-06-13T14:57:19.871+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting enqueue policy-pap | [2025-06-13T14:57:19.871+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate started policy-pap | 
[2025-06-13T14:57:19.877+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"243683ae-56ab-4597-926a-fcce27e0e31d","timestampMs":1749826639850,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:19.923+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"243683ae-56ab-4597-926a-fcce27e0e31d","timestampMs":1749826639850,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:19.923+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:57:19.927+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"243683ae-56ab-4597-926a-fcce27e0e31d","timestampMs":1749826639850,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:19.929+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:57:20.044+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"243683ae-56ab-4597-926a-fcce27e0e31d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"587a891a-49c5-4bf1-8169-985183639997","timestampMs":1749826640030,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.044+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"243683ae-56ab-4597-926a-fcce27e0e31d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"587a891a-49c5-4bf1-8169-985183639997","timestampMs":1749826640030,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.045+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 243683ae-56ab-4597-926a-fcce27e0e31d policy-pap | [2025-06-13T14:57:20.045+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping policy-pap | [2025-06-13T14:57:20.046+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping enqueue policy-pap | [2025-06-13T14:57:20.046+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping timer policy-pap | [2025-06-13T14:57:20.046+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=243683ae-56ab-4597-926a-fcce27e0e31d, expireMs=1749826669870] policy-pap | [2025-06-13T14:57:20.046+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping listener policy-pap | [2025-06-13T14:57:20.046+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopped policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate successful policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e start publishing next request policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange starting policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange starting listener policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange starting timer policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"55ff5c69-c399-49d4-a95f-d4c543d908a0","timestampMs":1749826640037,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=d49dcaf6-23f5-41e2-86f9-c004bd57c4bb, expireMs=1749826670059] policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange starting enqueue policy-pap | [2025-06-13T14:57:20.060+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange started policy-pap | [2025-06-13T14:57:20.060+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=d49dcaf6-23f5-41e2-86f9-c004bd57c4bb, expireMs=1749826670059] policy-pap | [2025-06-13T14:57:20.060+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.Naming","policy-type-version":"1.0.0","policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-13T14:57:20.060+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","timestampMs":1749826639851,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | 
[2025-06-13T14:57:20.082+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Error while fetching metadata with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T14:57:20.384+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"55ff5c69-c399-49d4-a95f-d4c543d908a0","timestampMs":1749826640037,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.385+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-13T14:57:20.388+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","timestampMs":1749826639851,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.389+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-13T14:57:20.391+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"72e98077-ce68-4257-9fdb-7e7ad741339a","timestampMs":1749826640073,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange stopping policy-pap | [2025-06-13T14:57:20.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange stopping enqueue policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange stopping timer policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=d49dcaf6-23f5-41e2-86f9-c004bd57c4bb, expireMs=1749826670059] policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange stopping listener policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange stopped policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange successful policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e start publishing next request policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate
starting listener policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting timer policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=63c49f14-f1a0-4743-8e01-8dc98e4cfb41, expireMs=1749826670625] policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting enqueue policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate started policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","timestampMs":1749826640376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.630+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","timestampMs":1749826639851,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.630+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-13T14:57:20.635+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"72e98077-ce68-4257-9fdb-7e7ad741339a","timestampMs":1749826640073,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.635+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d49dcaf6-23f5-41e2-86f9-c004bd57c4bb policy-pap | [2025-06-13T14:57:20.639+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","timestampMs":1749826640376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.639+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:57:20.638+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","timestampMs":1749826640376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.640+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type 
PDP_UPDATE policy-pap | [2025-06-13T14:57:20.651+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"93d49d05-9ee7-4d6b-9028-491a1ccee074","timestampMs":1749826640639,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.652+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 63c49f14-f1a0-4743-8e01-8dc98e4cfb41 policy-pap | [2025-06-13T14:57:20.658+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"93d49d05-9ee7-4d6b-9028-491a1ccee074","timestampMs":1749826640639,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping enqueue policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping timer policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=63c49f14-f1a0-4743-8e01-8dc98e4cfb41, expireMs=1749826670625] policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping listener policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopped policy-pap | [2025-06-13T14:57:20.664+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate successful policy-pap | [2025-06-13T14:57:20.664+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e has no more requests policy-pap | [2025-06-13T14:57:41.622+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-13T14:57:41.622+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-13T14:57:41.625+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms policy-pap | [2025-06-13T14:57:49.870+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=243683ae-56ab-4597-926a-fcce27e0e31d, expireMs=1749826669870] policy-pap | [2025-06-13T14:57:50.059+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=d49dcaf6-23f5-41e2-86f9-c004bd57c4bb, expireMs=1749826670059] policy-pap | [2025-06-13T14:58:29.575+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group defaultGroup policy-pap | [2025-06-13T14:58:29.576+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] 
add policy onap.restart.tca 1.0.0 to subgroup defaultGroup xacml count=2 policy-pap | [2025-06-13T14:58:29.577+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-13T14:58:29.578+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e defaultGroup xacml policies=1 policy-pap | [2025-06-13T14:58:29.578+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup policy-pap | [2025-06-13T14:58:29.625+00:00|INFO|SessionData|http-nio-6969-exec-3] use cached group defaultGroup policy-pap | [2025-06-13T14:58:29.625+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy OSDF_CASABLANCA.Affinity_Default 1.0.0 to subgroup defaultGroup xacml count=3 policy-pap | [2025-06-13T14:58:29.625+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy OSDF_CASABLANCA.Affinity_Default 1.0.0 policy-pap | [2025-06-13T14:58:29.625+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e defaultGroup xacml policies=2 policy-pap | [2025-06-13T14:58:29.625+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup policy-pap | [2025-06-13T14:58:29.626+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group defaultGroup policy-pap | [2025-06-13T14:58:29.644+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-13T14:58:29Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=OSDF_CASABLANCA.Affinity_Default 1.0.0, action=DEPLOYMENT, timestamp=2025-06-13T14:58:29Z, user=policyadmin)] policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting listener policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting timer policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer [name=6a5c2c9f-6c22-44fe-904b-515d314bb708, expireMs=1749826739674] policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting enqueue policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate started policy-pap | [2025-06-13T14:58:29.675+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=6a5c2c9f-6c22-44fe-904b-515d314bb708, expireMs=1749826739674] policy-pap | [2025-06-13T14:58:29.675+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6a5c2c9f-6c22-44fe-904b-515d314bb708","timestampMs":1749826709625,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:58:29.685+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6a5c2c9f-6c22-44fe-904b-515d314bb708","timestampMs":1749826709625,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:58:29.685+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:58:29.686+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6a5c2c9f-6c22-44fe-904b-515d314bb708","timestampMs":1749826709625,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:58:29.686+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:58:30.211+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"6a5c2c9f-6c22-44fe-904b-515d314bb708","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"800a49c0-b071-44e7-8819-4105949c61d2","timestampMs":1749826710206,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:58:30.212+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6a5c2c9f-6c22-44fe-904b-515d314bb708 policy-pap | [2025-06-13T14:58:30.219+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"6a5c2c9f-6c22-44fe-904b-515d314bb708","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"800a49c0-b071-44e7-8819-4105949c61d2","timestampMs":1749826710206,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping policy-pap | 
[2025-06-13T14:58:30.220+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping enqueue policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping timer policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6a5c2c9f-6c22-44fe-904b-515d314bb708, expireMs=1749826739674] policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping listener policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopped policy-pap | [2025-06-13T14:58:30.228+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate successful policy-pap | [2025-06-13T14:58:30.229+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e has no more requests policy-pap | [2025-06-13T14:58:30.229+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0},{"policy-type":"onap.policies.optimization.resource.AffinityPolicy","policy-type-version":"1.0.0","policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-13T14:58:54.336+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup policy-pap | [2025-06-13T14:58:54.338+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup defaultGroup xacml count=2 policy-pap | [2025-06-13T14:58:54.338+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-13T14:58:54.338+00:00|INFO|SessionData|http-nio-6969-exec-5] add update xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e defaultGroup xacml policies=0 policy-pap | [2025-06-13T14:58:54.338+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group defaultGroup policy-pap | [2025-06-13T14:58:54.338+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group defaultGroup policy-pap | [2025-06-13T14:58:54.352+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-13T14:58:54Z, user=policyadmin)] policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting listener policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting timer policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|TimerManager|http-nio-6969-exec-5] update timer registered Timer [name=cb526d69-01bf-4ec2-b43b-e5796b06e4c5, expireMs=1749826764365] policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|ServiceManager|http-nio-6969-exec-5] 
xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting enqueue policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate started policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","timestampMs":1749826734338,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:58:54.374+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","timestampMs":1749826734338,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:58:54.374+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:58:54.374+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","timestampMs":1749826734338,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:58:54.374+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T14:58:54.382+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"fe9b2d1d-c5b0-4dd8-9c19-c42c7ad985ee","timestampMs":1749826734376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:58:54.382+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping policy-pap | [2025-06-13T14:58:54.382+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping enqueue policy-pap | [2025-06-13T14:58:54.382+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping timer policy-pap | [2025-06-13T14:58:54.382+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=cb526d69-01bf-4ec2-b43b-e5796b06e4c5, expireMs=1749826764365] policy-pap | 
[2025-06-13T14:58:54.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping listener policy-pap | [2025-06-13T14:58:54.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopped policy-pap | [2025-06-13T14:58:54.384+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"fe9b2d1d-c5b0-4dd8-9c19-c42c7ad985ee","timestampMs":1749826734376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:58:54.385+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id cb526d69-01bf-4ec2-b43b-e5796b06e4c5 policy-pap | [2025-06-13T14:58:54.400+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate successful policy-pap | [2025-06-13T14:58:54.400+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e has no more requests policy-pap | [2025-06-13T14:58:54.401+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-13T14:58:59.675+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=6a5c2c9f-6c22-44fe-904b-515d314bb708, expireMs=1749826739674] policy-pap | [2025-06-13T14:59:12.852+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-13T14:59:20.060+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"eefa5f0e-984c-486a-a008-71aa56b4235b","timestampMs":1749826760051,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:59:20.061+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"eefa5f0e-984c-486a-a008-71aa56b4235b","timestampMs":1749826760051,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-13T14:59:20.062+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-xacml-pdp | Waiting for pap port 6969... policy-xacml-pdp | pap (172.17.0.9:6969) open policy-xacml-pdp | Waiting for kafka port 9092... 
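Note: the policy-xacml-pdp container's entrypoint blocks here until pap:6969 and kafka:9092 accept TCP connections before launching the JVM; the actual check is done in the entrypoint shell script. A hypothetical Java equivalent of the same wait-for-port pattern (class name, timeouts, and output format are illustrative, not the ONAP script):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class WaitForPort {
    // Polls until host:port accepts a TCP connection, like "Waiting for pap port 6969..." above.
    public static void await(String host, int port, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 1000);
                System.out.printf("%s (%s:%d) open%n", host, host, port);
                return;
            } catch (IOException e) {
                Thread.sleep(1000); // port not open yet; retry until the deadline
            }
        }
        throw new IllegalStateException("timed out waiting for " + host + ":" + port);
    }

    public static void main(String[] args) throws InterruptedException {
        await("pap", 6969, 120_000);
        await("kafka", 9092, 120_000);
    }
}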
policy-xacml-pdp | kafka (172.17.0.5:9092) open policy-xacml-pdp | + KEYSTORE=/opt/app/policy/pdpx/etc/ssl/policy-keystore policy-xacml-pdp | + TRUSTSTORE=/opt/app/policy/pdpx/etc/ssl/policy-truststore policy-xacml-pdp | + KEYSTORE_PASSWD=Pol1cy_0nap policy-xacml-pdp | + TRUSTSTORE_PASSWD=Pol1cy_0nap policy-xacml-pdp | + '[' 0 -ge 1 ] policy-xacml-pdp | + CONFIG_FILE= policy-xacml-pdp | + '[' -z ] policy-xacml-pdp | + CONFIG_FILE=/opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-truststore ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-keystore ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/xacml.properties ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/logback.xml ] policy-xacml-pdp | + echo 'Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json' policy-xacml-pdp | Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | + /usr/lib/jvm/default-jvm/bin/java -cp '/opt/app/policy/pdpx/etc:/opt/app/policy/pdpx/lib/*' '-Dlogback.configurationFile=/opt/app/policy/pdpx/etc/logback.xml' '-Djavax.net.ssl.keyStore=/opt/app/policy/pdpx/etc/ssl/policy-keystore' '-Djavax.net.ssl.keyStorePassword=Pol1cy_0nap' '-Djavax.net.ssl.trustStore=/opt/app/policy/pdpx/etc/ssl/policy-truststore' '-Djavax.net.ssl.trustStorePassword=Pol1cy_0nap' org.onap.policy.pdpx.main.startstop.Main -c /opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | [2025-06-13T14:57:13.976+00:00|INFO|Main|main] Starting policy xacml pdp service with arguments - [-c, /opt/app/policy/pdpx/etc/defaultConfig.json] policy-xacml-pdp | [2025-06-13T14:57:14.108+00:00|INFO|XacmlPdpActivator|main] Activator initializing using org.onap.policy.pdpx.main.parameters.XacmlPdpParameterGroup@37858383 policy-xacml-pdp | [2025-06-13T14:57:14.164+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-xacml-pdp | allow.auto.create.topics = true policy-xacml-pdp | auto.commit.interval.ms = 5000 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | auto.offset.reset = latest policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | check.crcs = true policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-1 policy-xacml-pdp | client.rack = policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | default.api.timeout.ms = 60000 policy-xacml-pdp | enable.auto.commit = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | exclude.internal.topics = true policy-xacml-pdp | fetch.max.bytes = 52428800 policy-xacml-pdp | fetch.max.wait.ms = 500 policy-xacml-pdp | fetch.min.bytes = 1 policy-xacml-pdp | group.id = bcceede6-cf80-4e3b-b200-9e273dce58d5 policy-xacml-pdp | group.instance.id = null policy-xacml-pdp | group.protocol = classic policy-xacml-pdp | group.remote.assignor = null policy-xacml-pdp | heartbeat.interval.ms = 3000 policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | internal.leave.group.on.close = true policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-xacml-pdp | isolation.level = read_uncommitted policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | max.partition.fetch.bytes = 1048576 policy-xacml-pdp | max.poll.interval.ms = 300000 policy-xacml-pdp | max.poll.records = 500 policy-xacml-pdp | metadata.max.age.ms = 300000 
policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-xacml-pdp | receive.buffer.bytes = 65536 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | session.timeout.ms = 45000 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | 
ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | policy-xacml-pdp | [2025-06-13T14:57:14.223+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp | [2025-06-13T14:57:14.366+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-13T14:57:14.366+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-13T14:57:14.366+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826634365 policy-xacml-pdp | [2025-06-13T14:57:14.369+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-1, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Subscribed to topic(s): policy-pdp-pap policy-xacml-pdp | [2025-06-13T14:57:14.431+00:00|INFO|XacmlPdpApplicationManager|main] Initialization applications org.onap.policy.pdpx.main.parameters.XacmlApplicationParameters@7ec3394b JerseyClient(name=policyApiParameters, https=false, selfSignedCerts=false, hostname=policy-api, port=6969, basePath=null, userName=policyadmin, password=zb!XztG34, client=org.glassfish.jersey.client.JerseyClient@698122b2, baseUrl=http://policy-api:6969/, alive=true) policy-xacml-pdp | [2025-06-13T14:57:14.443+00:00|INFO|XacmlPdpApplicationManager|main] Application guard supports [onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0] policy-xacml-pdp | [2025-06-13T14:57:14.443+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath guard at this path /opt/app/policy/pdpx/apps/guard policy-xacml-pdp | [2025-06-13T14:57:14.443+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/guard policy-xacml-pdp | [2025-06-13T14:57:14.444+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/guard/xacml.properties policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, 
count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.persistenceunit -> OperationsHistoryPU policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.name -> GetOperationOutcome policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.description -> Returns operation outcome policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.description -> Returns operation counts based on time window policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.password -> policy_user policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.issuer -> urn:org:onap:xacml:guard:get-operation-outcome policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.persistenceunit -> OperationsHistoryPU policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.driver -> org.postgresql.Driver policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.name -> CountRecentOperations policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] 
xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.url -> jdbc:postgresql://postgres:5432/operationshistory policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.user -> policy_user policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.issuer -> urn:org:onap:xacml:guard:count-recent-operations policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] xacml.pip.engines -> count-recent-operations,get-operation-outcome policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|StdXacmlApplicationServiceProvider|main] {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-13T14:57:14.448+00:00|WARN|XACMLProperties|main] Properties file /usr/lib/jvm/java-17-openjdk/lib/xacml.properties cannot be read. 
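The guard application's xacml.properties dump above wires two Policy Information Point (PIP) engines, count-recent-operations and get-operation-outcome, to the operations-history Postgres database (persistence unit OperationsHistoryPU); the WARN about /usr/lib/jvm/java-17-openjdk/lib/xacml.properties appears benign here, since each application loads its own properties file from /opt/app/policy/pdpx/apps. A minimal sketch of reading such a file with the standard java.util.Properties API (the class name XacmlPropsProbe is hypothetical; this is not the actual OnapPolicyFinderFactory code):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class XacmlPropsProbe {
    public static void main(String[] args) throws IOException {
        // Same per-application location as logged above.
        Path propsFile = Path.of("/opt/app/policy/pdpx/apps/guard/xacml.properties");
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(propsFile)) {
            props.load(in);
        }
        // xacml.pip.engines lists PIP ids; each id prefixes its own keys
        // (classname, issuer, persistenceunit, ...), as in the dump above.
        String engines = props.getProperty("xacml.pip.engines", "");
        if (!engines.isBlank()) {
            for (String id : engines.split(",")) {
                System.out.printf("PIP %s -> %s%n", id, props.getProperty(id + ".classname"));
            }
        }
    }
}

Run against the guard directory shown above, this would print the CountRecentOperationsPip and GetOperationOutcomePip class names.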
policy-xacml-pdp | [2025-06-13T14:57:14.476+00:00|INFO|XacmlPdpApplicationManager|main] Application optimization supports [onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0] policy-xacml-pdp | [2025-06-13T14:57:14.476+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath optimization at this path /opt/app/policy/pdpx/apps/optimization policy-xacml-pdp | [2025-06-13T14:57:14.476+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/optimization policy-xacml-pdp | [2025-06-13T14:57:14.476+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/optimization/xacml.properties policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> 
com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-13T14:57:14.478+00:00|INFO|XacmlPdpApplicationManager|main] Application naming supports [onap.policies.Naming 1.0.0] policy-xacml-pdp | [2025-06-13T14:57:14.478+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath naming at this path /opt/app/policy/pdpx/apps/naming policy-xacml-pdp | [2025-06-13T14:57:14.478+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/naming policy-xacml-pdp | [2025-06-13T14:57:14.478+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/naming/xacml.properties policy-xacml-pdp | [2025-06-13T14:57:14.478+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 
policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-13T14:57:14.481+00:00|INFO|XacmlPdpApplicationManager|main] Application native supports [onap.policies.native.Xacml 1.0.0, onap.policies.native.ToscaXacml 1.0.0] policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath native at this path /opt/app/policy/pdpx/apps/native policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/native policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/native/xacml.properties policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, 
xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, 
xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPdpApplicationManager|main] Application match supports [onap.policies.Match 1.0.0] policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath match at this path /opt/app/policy/pdpx/apps/match policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/match policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/match/xacml.properties policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> 
com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPdpApplicationManager|main] Application monitoring supports [onap.Monitoring 1.0.0] policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath monitoring at this path /opt/app/policy/pdpx/apps/monitoring policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/monitoring policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> 
com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-13T14:57:14.486+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-13T14:57:14.486+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-13T14:57:14.486+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-13T14:57:14.486+00:00|INFO|XacmlPdpApplicationManager|main] Finished applications initialization {optimize=org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplication@2b95e48b, native=org.onap.policy.xacml.pdp.application.nativ.NativePdpApplication@4a3329b9, guard=org.onap.policy.xacml.pdp.application.guard.GuardPdpApplication@3dddefd8, naming=org.onap.policy.xacml.pdp.application.naming.NamingPdpApplication@160ac7fb, match=org.onap.policy.xacml.pdp.application.match.MatchPdpApplication@12bfd80d, configure=org.onap.policy.xacml.pdp.application.monitoring.MonitoringPdpApplication@41925502} policy-xacml-pdp | [2025-06-13T14:57:14.503+00:00|INFO|XacmlPdpHearbeatPublisher|main] heartbeat topic probe 4000ms policy-xacml-pdp | [2025-06-13T14:57:14.694+00:00|INFO|ServiceManager|main] service manager starting policy-xacml-pdp | [2025-06-13T14:57:14.694+00:00|INFO|ServiceManager|main] service manager starting XACML PDP parameters policy-xacml-pdp | 
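At this point XacmlPdpApplicationManager has finished wiring six applications (guard, optimization, naming, native, match, and monitoring, registered under the "configure" key), each owning a directory under /opt/app/policy/pdpx/apps and a set of supported policy types. A hedged sketch of that type-to-application routing, with type and application names taken from the "Application ... supports" lines above (the Map-based lookup is illustrative, not the actual manager API):

import java.util.Map;

public class AppDispatchSketch {
    // One representative supported type per application, from the log above.
    static final Map<String, String> SUPPORTED = Map.of(
        "onap.policies.controlloop.guard.common.FrequencyLimiter", "guard",
        "onap.policies.optimization.resource.AffinityPolicy", "optimization",
        "onap.policies.Naming", "naming",
        "onap.policies.native.Xacml", "native",
        "onap.policies.Match", "match",
        "onap.Monitoring", "monitoring");

    public static void main(String[] args) {
        // A PDP_UPDATE carrying an onap.policies.Naming policy would be routed
        // to the naming application's /opt/app/policy/pdpx/apps/naming directory.
        System.out.println(SUPPORTED.get("onap.policies.Naming"));
    }
}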
[2025-06-13T14:57:14.695+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-xacml-pdp | [2025-06-13T14:57:14.695+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f574cc2 policy-xacml-pdp | [2025-06-13T14:57:14.708+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-xacml-pdp | [2025-06-13T14:57:14.709+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-xacml-pdp | allow.auto.create.topics = true policy-xacml-pdp | auto.commit.interval.ms = 5000 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | auto.offset.reset = latest policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | check.crcs = true policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2 policy-xacml-pdp | client.rack = policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | default.api.timeout.ms = 60000 policy-xacml-pdp | enable.auto.commit = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | exclude.internal.topics = true policy-xacml-pdp | fetch.max.bytes = 52428800 policy-xacml-pdp | fetch.max.wait.ms = 500 policy-xacml-pdp | fetch.min.bytes = 1 policy-xacml-pdp | group.id = bcceede6-cf80-4e3b-b200-9e273dce58d5 policy-xacml-pdp | group.instance.id = null policy-xacml-pdp | group.protocol = classic policy-xacml-pdp | group.remote.assignor = null policy-xacml-pdp | heartbeat.interval.ms = 3000 policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | internal.leave.group.on.close = true policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-xacml-pdp | isolation.level = read_uncommitted policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | max.partition.fetch.bytes = 1048576 policy-xacml-pdp | max.poll.interval.ms = 300000 policy-xacml-pdp | max.poll.records = 500 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-xacml-pdp | receive.buffer.bytes = 65536 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | session.timeout.ms = 45000 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | policy-xacml-pdp | 
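This second ConsumerConfig block is the dispatcher's own consumer (client id suffix -2) in the same generated consumer group, again plaintext to kafka:9092 with string deserializers and auto.offset.reset=latest. A minimal sketch of an equivalently configured consumer using the standard Apache Kafka client API (the group id below is a placeholder for the generated UUID in the log):

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirror the dump above; everything else is left at defaults.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group-id"); // placeholder UUID
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
        }
    }
}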
[2025-06-13T14:57:14.710+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp | [2025-06-13T14:57:14.721+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-13T14:57:14.721+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-13T14:57:14.721+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826634721 policy-xacml-pdp | [2025-06-13T14:57:14.722+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Subscribed to topic(s): policy-pdp-pap policy-xacml-pdp | [2025-06-13T14:57:14.722+00:00|INFO|ServiceManager|main] service manager starting topics policy-xacml-pdp | [2025-06-13T14:57:14.723+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-xacml-pdp | [2025-06-13T14:57:14.723+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e0036ebe-920b-4b5e-8391-fea799397d17, alive=false, publisher=null]]: starting policy-xacml-pdp | [2025-06-13T14:57:14.733+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-xacml-pdp | acks = -1 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | batch.size = 16384 policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | buffer.memory = 33554432 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = producer-1 policy-xacml-pdp | compression.gzip.level = -1 policy-xacml-pdp | compression.lz4.level = 9 policy-xacml-pdp | compression.type = none policy-xacml-pdp | compression.zstd.level = 3 policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | delivery.timeout.ms = 120000 policy-xacml-pdp | enable.idempotence = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-xacml-pdp | linger.ms = 0 policy-xacml-pdp | max.block.ms = 60000 policy-xacml-pdp | max.in.flight.requests.per.connection = 5 policy-xacml-pdp | max.request.size = 1048576 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metadata.max.idle.ms = 300000 policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partitioner.adaptive.partitioning.enable = true policy-xacml-pdp | partitioner.availability.timeout.ms = 0 policy-xacml-pdp | partitioner.class = null policy-xacml-pdp | partitioner.ignore.keys = false policy-xacml-pdp | receive.buffer.bytes = 32768 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 
policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retries = 2147483647 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | transaction.timeout.ms = 60000 policy-xacml-pdp | transactional.id = null policy-xacml-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-xacml-pdp | policy-xacml-pdp | [2025-06-13T14:57:14.734+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp | [2025-06-13T14:57:14.742+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] 
Instantiated an idempotent producer. policy-xacml-pdp | [2025-06-13T14:57:14.762+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-13T14:57:14.762+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-13T14:57:14.762+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826634762 policy-xacml-pdp | [2025-06-13T14:57:14.762+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e0036ebe-920b-4b5e-8391-fea799397d17, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|ServiceManager|main] service manager starting Terminate PDP policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|ServiceManager|main] service manager starting Heartbeat Publisher policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|ServiceManager|main] service manager starting REST Server policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|ServiceManager|main] service manager starting policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-xacml-pdp | [2025-06-13T14:57:14.772+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: registering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007f9d572ae2e8@357358c2 policy-xacml-pdp | [2025-06-13T14:57:14.772+00:00|INFO|SingleThreadedBusTopicSource|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=2]]]]: register: start not attempted policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, 
jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING policy-xacml-pdp | [2025-06-13T14:57:14.774+00:00|INFO|ServiceManager|main] service manager started policy-xacml-pdp | [2025-06-13T14:57:14.774+00:00|INFO|ServiceManager|main] service manager started policy-xacml-pdp | [2025-06-13T14:57:14.775+00:00|INFO|Main|main] Started policy-xacml-pdp service successfully. policy-xacml-pdp | [2025-06-13T14:57:14.775+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers: policy-xacml-pdp | [] policy-xacml-pdp | [2025-06-13T14:57:14.774+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-xacml-pdp | [2025-06-13T14:57:14.777+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"e18e1fff-9deb-4367-a557-a7dc64389e1f","timestampMs":1749826634765,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e"} policy-xacml-pdp | [2025-06-13T14:57:15.107+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Cluster ID: d-rF8NzzQdGshpvqUU-qrg policy-xacml-pdp | [2025-06-13T14:57:15.107+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: d-rF8NzzQdGshpvqUU-qrg policy-xacml-pdp | [2025-06-13T14:57:15.108+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] 
Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-xacml-pdp | [2025-06-13T14:57:15.109+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-xacml-pdp | [2025-06-13T14:57:15.115+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] (Re-)joining group policy-xacml-pdp | [2025-06-13T14:57:15.131+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Request joining group due to: need to re-join with the given member-id: consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11 policy-xacml-pdp | [2025-06-13T14:57:15.132+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] (Re-)joining group policy-xacml-pdp | [2025-06-13T14:57:15.330+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-xacml-pdp | [2025-06-13T14:57:15.330+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-xacml-pdp | [2025-06-13T14:57:18.137+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Successfully joined group with generation Generation{generationId=1, memberId='consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11', protocol='range'} policy-xacml-pdp | [2025-06-13T14:57:18.145+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Finished assignment for group at generation 1: {consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11=Assignment(partitions=[policy-pdp-pap-0])} policy-xacml-pdp | [2025-06-13T14:57:18.154+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Successfully synced group in generation Generation{generationId=1, memberId='consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11', protocol='range'} policy-xacml-pdp | [2025-06-13T14:57:18.154+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-xacml-pdp | [2025-06-13T14:57:18.156+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Adding newly assigned partitions: policy-pdp-pap-0 policy-xacml-pdp | [2025-06-13T14:57:18.164+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Found no committed offset for partition policy-pdp-pap-0 policy-xacml-pdp | [2025-06-13T14:57:18.175+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-xacml-pdp | [2025-06-13T14:57:19.202+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"e18e1fff-9deb-4367-a557-a7dc64389e1f","timestampMs":1749826634765,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e"} policy-xacml-pdp | [2025-06-13T14:57:19.247+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"e18e1fff-9deb-4367-a557-a7dc64389e1f","timestampMs":1749826634765,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e"} policy-xacml-pdp | [2025-06-13T14:57:19.250+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK policy-xacml-pdp | [2025-06-13T14:57:19.251+00:00|INFO|BidirectionalTopicClient|KAFKA-source-policy-pdp-pap] topic policy-pdp-pap is ready; found matching message PdpTopicCheck(super=PdpMessage(messageName=PDP_TOPIC_CHECK, requestId=e18e1fff-9deb-4367-a557-a7dc64389e1f, timestampMs=1749826634765, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=null, pdpSubgroup=null)) policy-xacml-pdp | [2025-06-13T14:57:19.256+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=1, locked=false, #topicListeners=2]]]]: unregistering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007f9d572ae2e8@357358c2 policy-xacml-pdp | [2025-06-13T14:57:19.258+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=8b7db761-4d49-42ed-9835-fab8afcf3c0a, timestampMs=1749826639257, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=null), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-13T14:57:19.264+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"8b7db761-4d49-42ed-9835-fab8afcf3c0a","timestampMs":1749826639257,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup"} policy-xacml-pdp | [2025-06-13T14:57:19.282+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"8b7db761-4d49-42ed-9835-fab8afcf3c0a","timestampMs":1749826639257,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup"} policy-xacml-pdp | 
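The exchange above is the topic readiness probe plus first heartbeat: the PDP publishes PDP_TOPIC_CHECK on policy-pdp-pap, reads its own message back to confirm the bidirectional path, unregisters the probe listener, then sends a PASSIVE PDP_STATUS that it likewise sees echoed on the same topic. A hedged sketch of building such a probe message with Gson (which the component also uses for its REST handlers); the PDP instance name is a placeholder and the Kafka send/poll plumbing is elided:

import java.util.Map;
import java.util.UUID;
import com.google.gson.Gson;

public class TopicCheckSketch {
    public static void main(String[] args) {
        String requestId = UUID.randomUUID().toString();
        // Field names mirror the PDP_TOPIC_CHECK JSON logged above.
        String out = new Gson().toJson(Map.of(
                "messageName", "PDP_TOPIC_CHECK",
                "requestId", requestId,
                "timestampMs", System.currentTimeMillis(),
                "name", "xacml-example-instance")); // placeholder PDP name
        // Publish `out` to policy-pdp-pap, then poll the same topic until a
        // message with this requestId comes back; only then start heartbeats.
        System.out.println(out);
    }
}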
[2025-06-13T14:57:19.282+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-13T14:57:19.923+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"243683ae-56ab-4597-926a-fcce27e0e31d","timestampMs":1749826639850,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:19.931+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=243683ae-56ab-4597-926a-fcce27e0e31d, timestampMs=1749826639850, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-8d981b63-9064-4d54-8468-b1eb1f91dc26, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.Naming, typeVersion=1.0.0, properties={policy-instance-name=ONAP_NF_NAMING_TIMESTAMP, naming-models=[{naming-type=VNF, naming-recipe=AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP, name-operation=to_lower_case(), naming-properties=[{property-name=AIC_CLOUD_REGION}, {property-name=CONSTANT, property-value=onap-nf}, {property-name=TIMESTAMP}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VNFC, naming-recipe=VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=ENTIRETY, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}, {property-name=NFC_NAMING_CODE}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VF-MODULE, 
naming-recipe=VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-value=-, property-name=DELIMITER}, {property-name=VF_MODULE_LABEL}, {property-name=VF_MODULE_TYPE}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=PRECEEDING, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}]}]}))], policiesToBeUndeployed=[])
policy-xacml-pdp | [2025-06-13T14:57:19.940+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP type: onap.policies.Naming weight: null policy:
policy-xacml-pdp | {"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}
policy-xacml-pdp | [2025-06-13T14:57:20.017+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is
policy-xacml-pdp | [XACML policy XML elided: the XML markup was stripped during log capture, leaving only scattered text nodes. Recoverable content: PolicyId SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy type onap.policies.Naming version 1.0.0, rule description "Default is to PERMIT if the policy matches.", and the TOSCA policy JSON above embedded as an obligation.]
policy-xacml-pdp | [2025-06-13T14:57:20.023+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory}
policy-xacml-pdp | /opt/app/policy/pdpx/apps/naming/xacml.properties
policy-xacml-pdp | [2025-06-13T14:57:20.030+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy-version=1.0.0} into application naming
policy-xacml-pdp | [2025-06-13T14:57:20.031+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-xacml-pdp |
{"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"243683ae-56ab-4597-926a-fcce27e0e31d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"587a891a-49c5-4bf1-8169-985183639997","timestampMs":1749826640030,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:20.037+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=55ff5c69-c399-49d4-a95f-d4c543d908a0, timestampMs=1749826640037, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-13T14:57:20.038+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"55ff5c69-c399-49d4-a95f-d4c543d908a0","timestampMs":1749826640037,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:20.044+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"243683ae-56ab-4597-926a-fcce27e0e31d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"587a891a-49c5-4bf1-8169-985183639997","timestampMs":1749826640030,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:20.045+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-13T14:57:20.053+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"55ff5c69-c399-49d4-a95f-d4c543d908a0","timestampMs":1749826640037,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:20.053+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-13T14:57:20.071+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","timestampMs":1749826639851,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:20.072+00:00|INFO|XacmlPdpStateChangeListener|KAFKA-source-policy-pdp-pap] PDP State Change message has been received from the PAP - PdpStateChange(super=PdpMessage(messageName=PDP_STATE_CHANGE, requestId=d49dcaf6-23f5-41e2-86f9-c004bd57c4bb, timestampMs=1749826639851, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, 
pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-8d981b63-9064-4d54-8468-b1eb1f91dc26, state=ACTIVE) policy-xacml-pdp | [2025-06-13T14:57:20.073+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] set state of org.onap.policy.pdpx.main.XacmlState@1db4588b to ACTIVE policy-xacml-pdp | [2025-06-13T14:57:20.073+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] State change: ACTIVE - Starting rest controller policy-xacml-pdp | [2025-06-13T14:57:20.073+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"72e98077-ce68-4257-9fdb-7e7ad741339a","timestampMs":1749826640073,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:20.086+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"72e98077-ce68-4257-9fdb-7e7ad741339a","timestampMs":1749826640073,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:20.086+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-13T14:57:20.638+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","timestampMs":1749826640376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:20.639+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=63c49f14-f1a0-4743-8e01-8dc98e4cfb41, timestampMs=1749826640376, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-8d981b63-9064-4d54-8468-b1eb1f91dc26, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[], policiesToBeUndeployed=[]) policy-xacml-pdp | [2025-06-13T14:57:20.639+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"93d49d05-9ee7-4d6b-9028-491a1ccee074","timestampMs":1749826640639,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:20.650+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"93d49d05-9ee7-4d6b-9028-491a1ccee074","timestampMs":1749826640639,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:57:20.650+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-13T14:57:35.675+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.4 - policyadmin [13/Jun/2025:14:57:35 +0000] "GET /metrics HTTP/1.1" 200 2118 "" "Prometheus/3.4.1" policy-xacml-pdp | [2025-06-13T14:57:42.731+00:00|INFO|RequestLog|qtp2014233765-29] 172.17.0.1 - - [13/Jun/2025:14:57:42 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0" policy-xacml-pdp | [2025-06-13T14:58:26.049+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:26 +0000] "GET /policy/pdpx/v1/healthcheck?null HTTP/1.1" 200 110 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-13T14:58:26.072+00:00|INFO|RequestLog|qtp2014233765-27] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:26 +0000] "GET /metrics?null HTTP/1.1" 200 2042 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-13T14:58:27.547+00:00|INFO|GuardTranslator|qtp2014233765-27] Converting Request DecisionRequest(onapName=Guard, onapComponent=Guard-component, onapInstance=Guard-component-instance, requestId=unique-request-guard-1, context=null, action=guard, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={guard={actor=APPC, operation=ModifyConfig, target=f17face5-69cb-4c88-9e0b-7426db7edddd, requestId=c7c6a4aa-bb61-4a15-b831-ba1472dd4a65, clname=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}}) policy-xacml-pdp | [2025-06-13T14:58:27.567+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-dateTime policy-xacml-pdp | [2025-06-13T14:58:27.567+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-date policy-xacml-pdp | [2025-06-13T14:58:27.567+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-time policy-xacml-pdp | [2025-06-13T14:58:27.567+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:timezone policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:vf-count policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-name policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-id policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-type policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.nf-naming-code 
policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:vserver.vserver-id policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:cloud-region.cloud-region-id policy-xacml-pdp | [2025-06-13T14:58:27.573+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Constructed using properties {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-13T14:58:27.573+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-13T14:58:27.573+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Combining root policies with urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides policy-xacml-pdp | [2025-06-13T14:58:27.579+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Root Policies: 1 policy-xacml-pdp | [2025-06-13T14:58:27.579+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Referenced Policies: 0 policy-xacml-pdp | [2025-06-13T14:58:27.580+00:00|INFO|StdPolicyFinder|qtp2014233765-27] Updating policy map with policy efa1dcb1-71d0-4b50-b930-711c0f3c432e version 1.0 policy-xacml-pdp | [2025-06-13T14:58:27.584+00:00|INFO|StdOnapPip|qtp2014233765-27] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, 
xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-13T14:58:27.671+00:00|INFO|LogHelper|qtp2014233765-27] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] policy-xacml-pdp | [2025-06-13T14:58:27.711+00:00|INFO|Version|qtp2014233765-27] HHH000412: Hibernate ORM core version 6.6.16.Final policy-xacml-pdp | [2025-06-13T14:58:27.733+00:00|INFO|RegionFactoryInitiator|qtp2014233765-27] HHH000026: Second-level cache disabled policy-xacml-pdp | [2025-06-13T14:58:27.875+00:00|WARN|pooling|qtp2014233765-27] HHH10001002: Using built-in connection pool (not intended for production use) policy-xacml-pdp | [2025-06-13T14:58:28.105+00:00|INFO|pooling|qtp2014233765-27] HHH10001005: Database info: policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] policy-xacml-pdp | Database driver: org.postgresql.Driver policy-xacml-pdp | Database version: 16.4 policy-xacml-pdp | Autocommit mode: false policy-xacml-pdp | Isolation level: undefined/unknown policy-xacml-pdp | Minimum pool size: 1 policy-xacml-pdp | Maximum pool size: 20 policy-xacml-pdp | [2025-06-13T14:58:28.952+00:00|INFO|JtaPlatformInitiator|qtp2014233765-27] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-xacml-pdp | [2025-06-13T14:58:28.985+00:00|INFO|StdOnapPip|qtp2014233765-27] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, 
xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-13T14:58:28.988+00:00|INFO|LogHelper|qtp2014233765-27] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] policy-xacml-pdp | [2025-06-13T14:58:28.990+00:00|INFO|RegionFactoryInitiator|qtp2014233765-27] HHH000026: Second-level cache disabled policy-xacml-pdp | [2025-06-13T14:58:29.007+00:00|WARN|pooling|qtp2014233765-27] HHH10001002: Using built-in connection pool (not intended for production use) policy-xacml-pdp | [2025-06-13T14:58:29.043+00:00|INFO|pooling|qtp2014233765-27] HHH10001005: Database info: policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] policy-xacml-pdp | Database driver: org.postgresql.Driver policy-xacml-pdp | Database version: 16.4 policy-xacml-pdp | Autocommit mode: false policy-xacml-pdp | Isolation level: undefined/unknown policy-xacml-pdp | Minimum pool size: 1 policy-xacml-pdp | Maximum pool size: 20 policy-xacml-pdp | [2025-06-13T14:58:29.074+00:00|INFO|JtaPlatformInitiator|qtp2014233765-27] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-xacml-pdp | [2025-06-13T14:58:29.078+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-27] Elapsed Time: 1510ms policy-xacml-pdp | [2025-06-13T14:58:29.078+00:00|INFO|GuardTranslator|qtp2014233765-27] Converting Response 
{results=[{decision=NotApplicable,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component-instance}],includeInResults=true}{attributeId=urn:org:onap:guard:request:request-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=unique-request-guard-1}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:guard:clname:clname-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}],includeInResults=true}{attributeId=urn:org:onap:guard:actor:actor-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=APPC}],includeInResults=true}{attributeId=urn:org:onap:guard:operation:operation-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ModifyConfig}],includeInResults=true}{attributeId=urn:org:onap:guard:target:target-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=f17face5-69cb-4c88-9e0b-7426db7edddd}],includeInResults=true}]}]}]} policy-xacml-pdp | [2025-06-13T14:58:29.084+00:00|INFO|RequestLog|qtp2014233765-27] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:27 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 19 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-13T14:58:29.684+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6a5c2c9f-6c22-44fe-904b-515d314bb708","timestampMs":1749826709625,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:58:29.686+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=6a5c2c9f-6c22-44fe-904b-515d314bb708, timestampMs=1749826709625, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-8d981b63-9064-4d54-8468-b1eb1f91dc26, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.monitoring.tcagen2, typeVersion=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}})), ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.optimization.resource.AffinityPolicy, typeVersion=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}))], policiesToBeUndeployed=[]) policy-xacml-pdp | 
[2025-06-13T14:58:29.687+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: onap.restart.tca type: onap.policies.monitoring.tcagen2 weight: null policy:
policy-xacml-pdp | {"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}
policy-xacml-pdp | [2025-06-13T14:58:29.723+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is
policy-xacml-pdp | [XACML policy XML elided: markup stripped during log capture. Recoverable content: PolicyId onap.restart.tca, policy type onap.policies.monitoring.tcagen2 version 1.0.0, rule description "Default is to PERMIT if the policy matches.", and the TOSCA policy JSON above embedded as an obligation.]
policy-xacml-pdp | [2025-06-13T14:58:29.723+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory}
policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties
policy-xacml-pdp | [2025-06-13T14:58:29.724+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} into application monitoring
policy-xacml-pdp | [2025-06-13T14:58:29.724+00:00|INFO|OptimizationPdpApplication|KAFKA-source-policy-pdp-pap] optimization can support onap.policies.optimization.resource.AffinityPolicy 1.0.0
policy-xacml-pdp | [2025-06-13T14:58:29.725+00:00|ERROR|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] PolicyType not found in data area yet /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml
policy-xacml-pdp | java.nio.file.NoSuchFileException: /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml
policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
policy-xacml-pdp | at
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) policy-xacml-pdp | at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218) policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:380) policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:432) policy-xacml-pdp | at java.base/java.nio.file.Files.readAllBytes(Files.java:3288) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.loadPolicyType(StdMatchableTranslator.java:515) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.findPolicyType(StdMatchableTranslator.java:480) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.convertPolicy(StdMatchableTranslator.java:241) policy-xacml-pdp | at org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplicationTranslator.convertPolicy(OptimizationPdpApplicationTranslator.java:72) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider.loadPolicy(StdXacmlApplicationServiceProvider.java:127) policy-xacml-pdp | at org.onap.policy.pdpx.main.rest.XacmlPdpApplicationManager.loadDeployedPolicy(XacmlPdpApplicationManager.java:199) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.XacmlPdpUpdatePublisher.handlePdpUpdate(XacmlPdpUpdatePublisher.java:91) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:72) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:36) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.ScoListener.onTopicEvent(ScoListener.java:75) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher.onTopicEvent(MessageTypeDispatcher.java:97) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.JsonListener.onTopicEvent(JsonListener.java:61) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.TopicBase.broadcast(TopicBase.java:170) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.fetchAllMessages(SingleThreadedBusTopicSource.java:252) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.run(SingleThreadedBusTopicSource.java:235) policy-xacml-pdp | at java.base/java.lang.Thread.run(Thread.java:840) policy-xacml-pdp | [2025-06-13T14:58:29.773+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls policy-xacml-pdp | [2025-06-13T14:58:29.775+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls policy-xacml-pdp | [2025-06-13T14:58:30.137+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] Successfully pulled onap.policies.optimization.resource.AffinityPolicy 1.0.0 policy-xacml-pdp | [2025-06-13T14:58:30.169+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.resource.AffinityPolicy:1.0.0 policy-xacml-pdp | [2025-06-13T14:58:30.169+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Retrieving datatype policy.data.affinityProperties_properties policy-xacml-pdp | [2025-06-13T14:58:30.169+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] 
Scanning PolicyType onap.policies.optimization.Resource:1.0.0
policy-xacml-pdp | [2025-06-13T14:58:30.170+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.Optimization:1.0.0
policy-xacml-pdp | [2025-06-13T14:58:30.170+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Found root - done scanning
policy-xacml-pdp | [2025-06-13T14:58:30.170+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: OSDF_CASABLANCA.Affinity_Default type: onap.policies.optimization.resource.AffinityPolicy weight: 0 policy:
policy-xacml-pdp | {"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}
policy-xacml-pdp | [2025-06-13T14:58:30.189+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap]
policy-xacml-pdp | [XACML policy XML elided: markup stripped during log capture. Recoverable content: PolicyId OSDF_CASABLANCA.Affinity_Default, rule description "Default is to PERMIT if the policy matches.", a matchable condition described by the text nodes "IF exists and is equal" / "Does the policy-type attribute exist?" / "Get the size of policy-type attributes" / "Is this policy-type in the list?" keyed on policy-type onap.policies.optimization.resource.AffinityPolicy, weight 0, and the TOSCA policy JSON above embedded as an obligation.]
policy-xacml-pdp | [2025-06-13T14:58:30.205+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is
policy-xacml-pdp | [Same XACML policy XML elided again: identical text nodes as the StdMatchableTranslator dump above.]
policy-xacml-pdp | [2025-06-13T14:58:30.205+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory}
policy-xacml-pdp | /opt/app/policy/pdpx/apps/optimization/xacml.properties
policy-xacml-pdp | [2025-06-13T14:58:30.205+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0} into application optimization
policy-xacml-pdp | [2025-06-13T14:58:30.206+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"6a5c2c9f-6c22-44fe-904b-515d314bb708","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"800a49c0-b071-44e7-8819-4105949c61d2","timestampMs":1749826710206,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
policy-xacml-pdp | [2025-06-13T14:58:30.236+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-xacml-pdp |
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"6a5c2c9f-6c22-44fe-904b-515d314bb708","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"800a49c0-b071-44e7-8819-4105949c61d2","timestampMs":1749826710206,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:58:30.236+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-13T14:58:35.580+00:00|INFO|RequestLog|qtp2014233765-32] 172.17.0.4 - policyadmin [13/Jun/2025:14:58:35 +0000] "GET /metrics HTTP/1.1" 200 2159 "" "Prometheus/3.4.1" policy-xacml-pdp | [2025-06-13T14:58:53.879+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-26] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) policy-xacml-pdp | [2025-06-13T14:58:53.881+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:org:onap:policy-type policy-xacml-pdp | [2025-06-13T14:58:53.882+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-13T14:58:53.882+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-13T14:58:53.882+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-13T14:58:53.882+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Loading policy file /opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml policy-xacml-pdp | [2025-06-13T14:58:53.900+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Root Policies: 1 policy-xacml-pdp | [2025-06-13T14:58:53.900+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Referenced Policies: 0 policy-xacml-pdp | [2025-06-13T14:58:53.900+00:00|INFO|StdPolicyFinder|qtp2014233765-26] Updating policy map with policy f172ec62-5b8b-456a-9a1a-5fff266087b4 version 1.0 policy-xacml-pdp | 
[2025-06-13T14:58:53.900+00:00|INFO|StdPolicyFinder|qtp2014233765-26] Updating policy map with policy onap.restart.tca version 1.0.0 policy-xacml-pdp | [2025-06-13T14:58:53.916+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-26] Elapsed Time: 35ms policy-xacml-pdp | [2025-06-13T14:58:53.917+00:00|INFO|StdBaseTranslator|qtp2014233765-26] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=f172ec62-5b8b-456a-9a1a-5fff266087b4,version=1.0}]}]} policy-xacml-pdp | [2025-06-13T14:58:53.917+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-26] Obligation: 
urn:org:onap:rest:body policy-xacml-pdp | [2025-06-13T14:58:53.917+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-26] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-13T14:58:53.917+00:00|INFO|MonitoringPdpApplication|qtp2014233765-26] Abbreviating decision results DecisionResponse(status=null, message=null, advice=null, obligations=null, policies={onap.restart.tca={type=onap.policies.monitoring.tcagen2, type_version=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}}, name=onap.restart.tca, version=1.0.0, metadata={policy-id=onap.restart.tca, policy-version=1.0.0}}}, attributes=null) policy-xacml-pdp | [2025-06-13T14:58:53.919+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:53 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 146 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-13T14:58:53.932+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) policy-xacml-pdp | [2025-06-13T14:58:53.932+00:00|WARN|RequestParser|qtp2014233765-29] Unable to extract attribute value from object: urn:org:onap:policy-type policy-xacml-pdp | [2025-06-13T14:58:53.933+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-29] Elapsed Time: 1ms policy-xacml-pdp | [2025-06-13T14:58:53.933+00:00|INFO|StdBaseTranslator|qtp2014233765-29] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=f172ec62-5b8b-456a-9a1a-5fff266087b4,version=1.0}]}]} policy-xacml-pdp | [2025-06-13T14:58:53.933+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-13T14:58:53.934+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-13T14:58:53.934+00:00|INFO|MonitoringPdpApplication|qtp2014233765-29] Unsupported query param 
for Monitoring application: {null=[]} policy-xacml-pdp | [2025-06-13T14:58:53.936+00:00|INFO|RequestLog|qtp2014233765-29] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:53 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1055 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-13T14:58:53.945+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Converting Request DecisionRequest(onapName=SDNC, onapComponent=SDNC-component, onapInstance=SDNC-component-instance, requestId=unique-request-sdnc-1, context=null, action=naming, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={nfRole=[], naming-type=[], property-name=[], policy-type=[onap.policies.Naming]}) policy-xacml-pdp | [2025-06-13T14:58:53.945+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:resource:resource-id policy-xacml-pdp | [2025-06-13T14:58:53.945+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-13T14:58:53.945+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-13T14:58:53.945+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-13T14:58:53.946+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Loading policy file /opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml policy-xacml-pdp | [2025-06-13T14:58:53.954+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Root Policies: 1 policy-xacml-pdp | [2025-06-13T14:58:53.954+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Referenced Policies: 0 policy-xacml-pdp | [2025-06-13T14:58:53.954+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating policy map with policy 393abb79-9d92-4638-bc69-1509d0a85b0d version 1.0 policy-xacml-pdp | [2025-06-13T14:58:53.954+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating policy map with policy SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP version 1.0.0 policy-xacml-pdp | [2025-06-13T14:58:53.956+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-30] Elapsed Time: 11ms policy-xacml-pdp | [2025-06-13T14:58:53.956+00:00|INFO|StdBaseTranslator|qtp2014233765-30] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component-instance}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:policy-type,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}],includeInResults=true}]}],policyIdentifiers=[{id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP,version=1.0.0}],policySetIdentifiers=[{id=393abb79-9d92-4638-bc69-1509d0a85b0d,versi
on=1.0}]}]} policy-xacml-pdp | [2025-06-13T14:58:53.956+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-13T14:58:53.956+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-13T14:58:53.958+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:53 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1598 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-13T14:58:53.972+00:00|INFO|StdMatchableTranslator|qtp2014233765-28] Converting Request DecisionRequest(onapName=OOF, onapComponent=OOF-component, onapInstance=OOF-component-instance, requestId=null, context={subscriberName=[]}, action=optimize, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={scope=[], services=[], resources=[], geography=[]}) policy-xacml-pdp | [2025-06-13T14:58:53.975+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-13T14:58:53.975+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-13T14:58:53.975+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-13T14:58:53.975+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Loading policy file /opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml policy-xacml-pdp | [2025-06-13T14:58:53.982+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Root Policies: 1 policy-xacml-pdp | [2025-06-13T14:58:53.982+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Referenced Policies: 0 policy-xacml-pdp | [2025-06-13T14:58:53.982+00:00|INFO|StdPolicyFinder|qtp2014233765-28] Updating policy map with policy 2b759476-56ab-447d-ad39-2356793ff05b version 1.0 policy-xacml-pdp | [2025-06-13T14:58:53.982+00:00|INFO|StdPolicyFinder|qtp2014233765-28] Updating policy map with policy OSDF_CASABLANCA.Affinity_Default version 1.0.0 policy-xacml-pdp | [2025-06-13T14:58:53.983+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-28] Elapsed Time: 9ms policy-xacml-pdp | [2025-06-13T14:58:53.983+00:00|INFO|StdBaseTranslator|qtp2014233765-28] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OSDF_CASABLANCA.Affinity_Default}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:weight,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#integer,value=0}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.optimization.resource.AffinityPolicy}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component-instance}],includeInResults=true}]}],policyIdentifiers=[{id=OSDF_CASABLANCA.Affinity_Default,version=1.0.0}],policySetIdentifiers=[{id=2b759476-56ab-447d-ad39-2356793ff05b,version=1.0}]}]} policy-xacml-pdp | [2025-06-13T14:58:53.984+00:00|INFO|StdMatchableTranslator|qtp2014233765-28] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-13T14:58:53.984+00:00|INFO|StdMatchableTranslator|qtp2014233765-28] New entry onap.policies.optimization.resource.AffinityPolicy weight 0 policy-xacml-pdp | [2025-06-13T14:58:53.984+00:00|INFO|StdMatchableTranslator|qtp2014233765-28] Policy (OSDF_CASABLANCA.Affinity_Default,{type=onap.policies.optimization.resource.AffinityPolicy, type_version=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}, name=OSDF_CASABLANCA.Affinity_Default, version=1.0.0, metadata={policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0}}) policy-xacml-pdp | [2025-06-13T14:58:53.986+00:00|INFO|RequestLog|qtp2014233765-28] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:53 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 467 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-13T14:58:54.373+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","timestampMs":1749826734338,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=cb526d69-01bf-4ec2-b43b-e5796b06e4c5, timestampMs=1749826734338, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-8d981b63-9064-4d54-8468-b1eb1f91dc26, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[], policiesToBeUndeployed=[onap.restart.tca 1.0.0]) policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-13T14:58:54.375+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-13T14:58:54.376+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Unloaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} from application monitoring policy-xacml-pdp | 
[2025-06-13T14:58:54.376+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"fe9b2d1d-c5b0-4dd8-9c19-c42c7ad985ee","timestampMs":1749826734376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:58:54.383+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"fe9b2d1d-c5b0-4dd8-9c19-c42c7ad985ee","timestampMs":1749826734376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:58:54.383+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-13T14:59:20.051+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=eefa5f0e-984c-486a-a008-71aa56b4235b, timestampMs=1749826760051, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=ACTIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0, OSDF_CASABLANCA.Affinity_Default 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-13T14:59:20.052+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"eefa5f0e-984c-486a-a008-71aa56b4235b","timestampMs":1749826760051,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:59:20.061+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"eefa5f0e-984c-486a-a008-71aa56b4235b","timestampMs":1749826760051,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-13T14:59:20.062+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-13T14:59:35.580+00:00|INFO|RequestLog|qtp2014233765-31] 172.17.0.4 - policyadmin [13/Jun/2025:14:59:35 +0000] "GET /metrics HTTP/1.1" 200 2211 "" "Prometheus/3.4.1" postgres | The files belonging to this database system will be owned by user "postgres". postgres | This user must also own the server process. 
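The decision exchanges logged above can be replayed by hand against the xacml-pdp REST endpoint. A minimal curl sketch follows; the path, the abbrev=true query parameter, the policyadmin user, and the payload values are taken from the RequestLog and DecisionRequest entries above, while the host, port, password, and exact JSON field names are assumptions based on the usual CSIT defaults:

# Hedged sketch of the "configure" decision call recorded in the RequestLog.
# Host, port and password are placeholders; substitute the values used by the
# compose stack under test.
curl -sk -u 'policyadmin:<password>' \
  -H 'Content-Type: application/json' \
  -X POST 'https://localhost:6969/policy/pdpx/v1/decision?abbrev=true' \
  -d '{
    "ONAPName": "DCAE",
    "ONAPComponent": "PolicyHandler",
    "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
    "action": "configure",
    "resource": {"policy-id": "onap.restart.tca"}
  }'
# With abbrev=true the Monitoring application trims the policy bodies from the
# response ("Abbreviating decision results" above): the log shows 146 bytes
# returned for the abbreviated call versus 1055 bytes without the parameter.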
postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | waiting for server to start....2025-06-13 14:56:35.855 UTC [47] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-13 14:56:35.857 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-13 14:56:35.864 UTC [50] LOG: database system was shut down at 2025-06-13 14:56:35 UTC postgres | 2025-06-13 14:56:35.870 UTC [47] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. 
postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | 2025-06-13 14:56:37.379 UTC [47] LOG: received fast shutdown request postgres 
| waiting for server to shut down....2025-06-13 14:56:37.382 UTC [47] LOG: aborting any active transactions postgres | 2025-06-13 14:56:37.384 UTC [47] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1 postgres | 2025-06-13 14:56:37.386 UTC [48] LOG: shutting down postgres | 2025-06-13 14:56:37.388 UTC [48] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-13 14:56:37.963 UTC [48] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.401 s, sync=0.164 s, total=0.578 s; sync files=1788, longest=0.038 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-13 14:56:37.974 UTC [47] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-13 14:56:38.011 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-13 14:56:38.011 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-13 14:56:38.011 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-13 14:56:38.017 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-13 14:56:38.026 UTC [100] LOG: database system was shut down at 2025-06-13 14:56:37 UTC postgres | 2025-06-13 14:56:38.030 UTC [1] LOG: database system is ready to accept connections prometheus | time=2025-06-13T14:56:37.008Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-13T14:56:37.008Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-13T14:56:37.008Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-13T14:56:37.010Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-13T14:56:37.012Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-13T14:56:37.013Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-13T14:56:37.014Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-13T14:56:37.014Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-13T14:56:37.018Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-13T14:56:37.018Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.21µs prometheus | time=2025-06-13T14:56:37.018Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-13T14:56:37.018Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=229.954µs prometheus | time=2025-06-13T14:56:37.018Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=84.231µs wal_replay_duration=257.024µs wbl_replay_duration=180ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.21µs total_replay_duration=449.116µs prometheus | time=2025-06-13T14:56:37.024Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-13T14:56:37.024Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-13T14:56:37.025Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-13T14:56:37.026Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-13T14:56:37.026Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.94µs remote_storage=4.69µs web_handler=600ns query_engine=1.37µs scrape=246.754µs scrape_sd=163.602µs notify=120.822µs notify_sd=19.72µs rules=1.62µs tracing=3.83µs filename=/etc/prometheus/prometheus.yml totalDuration=1.880837ms prometheus | time=2025-06-13T14:56:37.026Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-13T14:56:37.027Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-13 14:56:36,784] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:56:36,786] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:56:36,786] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:56:36,786] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:56:36,786] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:56:36,788] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-13 14:56:36,788] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-13 14:56:36,788] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-13 14:56:36,788] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-13 14:56:36,789] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-13 14:56:36,790] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:56:36,790] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:56:36,790] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:56:36,790] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:56:36,790] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 14:56:36,791] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-13 14:56:36,801] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-13 14:56:36,804] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-13 14:56:36,804] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-13 14:56:36,806] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-13 14:56:36,814] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,814] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,814] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,815] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,815] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,815] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,815] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,815] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,815] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,815] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-13 14:56:36,816] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,817] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,817] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,817] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,818] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-13 14:56:36,819] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,819] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,820] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-13 14:56:36,820] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 14:56:36,823] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,823] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,824] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-13 14:56:36,824] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-13 14:56:36,824] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:36,845] INFO Logging initialized @375ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-13 14:56:36,899] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-13 14:56:36,899] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-13 14:56:36,914] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-13 14:56:36,946] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-13 14:56:36,946] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-13 14:56:36,948] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-13 14:56:36,952] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-13 14:56:36,963] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-13 14:56:36,972] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-13 14:56:36,972] INFO Started @507ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-13 14:56:36,973] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-13 14:56:36,978] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-13 14:56:36,979] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-13 14:56:36,980] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-13 14:56:36,982] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-13 14:56:36,996] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-13 14:56:36,996] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-13 14:56:36,996] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-13 14:56:36,996] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-13 14:56:37,002] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-13 14:56:37,002] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-13 14:56:37,004] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-13 14:56:37,005] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-13 14:56:37,005] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 14:56:37,013] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-13 14:56:37,013] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-13 14:56:37,029] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-13 14:56:37,030] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-13 14:56:43,571] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
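The teardown sequence that follows is the standard docker compose cleanup of the CSIT stack; the "compose_default" network name below suggests the compose project is named "compose". A hedged equivalent of the command driving it (the project name is inferred, everything else is an assumption):

# Stop and remove the CSIT containers and the compose_default network,
# mirroring the "Container ... Stopping/Stopped/Removing/Removed" lines below.
docker compose -p compose down --remove-orphans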
Container policy-csit  Stopping
Container grafana  Stopping
Container policy-xacml-pdp  Stopping
Container policy-csit  Stopped
Container policy-csit  Removing
Container policy-csit  Removed
Container grafana  Stopped
Container grafana  Removing
Container grafana  Removed
Container prometheus  Stopping
Container prometheus  Stopped
Container prometheus  Removing
Container prometheus  Removed
Container policy-xacml-pdp  Stopped
Container policy-xacml-pdp  Removing
Container policy-xacml-pdp  Removed
Container policy-pap  Stopping
Container policy-pap  Stopped
Container policy-pap  Removing
Container policy-pap  Removed
Container policy-api  Stopping
Container kafka  Stopping
Container kafka  Stopped
Container kafka  Removing
Container kafka  Removed
Container zookeeper  Stopping
Container zookeeper  Stopped
Container zookeeper  Removing
Container zookeeper  Removed
Container policy-api  Stopped
Container policy-api  Removing
Container policy-api  Removed
Container policy-db-migrator  Stopping
Container policy-db-migrator  Stopped
Container policy-db-migrator  Removing
Container policy-db-migrator  Removed
Container postgres  Stopping
Container postgres  Stopped
Container postgres  Removing
Container postgres  Removed
Network compose_default  Removing
Network compose_default  Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2051 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins15209471800498681819.sh
---> sysstat.sh
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins8640850003505794288.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']'
+ mkdir -p /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/archives/
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins5019103323956257828.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qCrN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-qCrN/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins323986876715251989.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/config17668754252716631788tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins4399119877412921294.sh
---> create-netrc.sh
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins6315400656309036051.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qCrN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-qCrN/bin to PATH
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins5286077448809273713.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins13351093357169172437.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qCrN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-qCrN/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash -l /tmp/jenkins2930700617725488237.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qCrN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-qCrN/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/816
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
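Note: the logs-deploy step above is the standard LF Releng log shipper; with the Nexus URL, path, and archive pattern printed in the log, it is roughly equivalent to the following lftools calls (a sketch only, assuming lftools' deploy subcommands; WORKSPACE and BUILD_URL are the usual Jenkins-provided variables):
  NEXUS_URL=https://nexus.onap.org
  NEXUS_PATH=production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/816
  # Push selected workspace artifacts (the robot/surefire output files)...
  lftools deploy archives -p '**/target/surefire-reports/*-output.txt' "$NEXUS_URL" "$NEXUS_PATH" "$WORKSPACE"
  # ...then ship the console log and build metadata for this build
  lftools deploy logs "$NEXUS_URL" "$NEXUS_PATH" "$BUILD_URL"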
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-20904 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:         x86_64
CPU op-mode(s):       32-bit, 64-bit
Byte Order:           Little Endian
CPU(s):               8
On-line CPU(s) list:  0-7
Thread(s) per core:   1
Core(s) per socket:   1
Socket(s):            8
NUMA node(s):         1
Vendor ID:            AuthenticAMD
CPU family:           23
Model:                49
Model name:           AMD EPYC-Rome Processor
Stepping:             0
CPU MHz:              2799.998
BogoMIPS:             5599.99
Virtualization:       AMD-V
Hypervisor vendor:    KVM
Virtualization type:  full
L1d cache:            32K
L1i cache:            32K
L2 cache:             512K
L3 cache:             16384K
NUMA node0 CPU(s):    0-7
Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   15G  141G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         878       24291           0        6997       30833
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:53:53:2a brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.73/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85985sec preferred_lft 85985sec
    inet6 fe80::f816:3eff:fe53:532a/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:59:2e:56:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:59ff:fe2e:56b8/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20904)  06/13/25  _x86_64_  (8 CPU)

14:54:20  LINUX RESTART  (8 CPU)

14:55:01        tps      rtps      wtps   bread/s    bwrtn/s
14:56:01     171.27     22.80    148.48   2287.22   75249.86
14:57:01     691.22      4.55    686.67    472.99  235366.24
14:58:01     143.38      0.22    143.16     32.66   19778.30
14:59:01      95.93      0.23     95.70     15.60   18054.72
15:00:01      15.18      0.05     15.13     10.80     321.28
15:01:01      70.04      1.80     68.24     93.97    2456.11
Average:     197.83      4.94    192.89    485.53   58536.20

14:55:01  kbmemfree   kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
14:56:01   28616684  31565372    4322536     13.12      83180   3150268   2494316     7.34   1037832  2930684  1324056
14:57:01   24079988  30397324    8859232     26.90     158484   6266296   7057264    20.76   2421204  5816792      908
14:58:01   22814544  29660452   10124676     30.74     181100   6728568   8251508    24.28   3270380  6167192    29992
14:59:01   22604888  29557656   10334332     31.37     196020   6809912   8385752    24.67   3416276  6219948      596
15:00:01   22812732  29723028   10126488     30.74     196244   6772556   7442444    21.90   3268588  6173764      116
15:01:01   24897560  31597092    8041660     24.41     198020   6549212   1629636     4.79   1444896  5975372    11188
Average:   24304399  30416821    8634821     26.21     168841   6046135   5876820    17.29   2476529  5547292   227809

14:55:01            IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
14:56:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:56:01             ens3    764.51    505.37  15214.25     46.63      0.00      0.00      0.00      0.00
14:56:01               lo     11.63     11.63      1.10      1.10      0.00      0.00      0.00      0.00
14:57:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:57:01      veth8925360      0.00      0.23      0.00      0.01      0.00      0.00      0.00      0.00
14:57:01  br-1f72558afd59     36.41     44.99      2.29    317.54      0.00      0.00      0.00      0.00
14:57:01      vetha8ec03b      1.52      1.52      0.16      0.16      0.00      0.00      0.00      0.00
14:58:01          docker0     82.57    104.08      4.47   1053.84      0.00      0.00      0.00      0.00
14:58:01      veth81cc51b     82.57    104.20      5.60   1053.85      0.00      0.00      0.00      0.09
14:58:01      veth8925360      0.45      0.50      0.05      1.00      0.00      0.00      0.00      0.00
14:58:01  br-1f72558afd59      0.48      0.42      0.03      0.03      0.00      0.00      0.00      0.00
14:59:01          docker0     39.84     57.66      3.46    292.92      0.00      0.00      0.00      0.00
14:59:01      veth8925360      0.52      0.63      0.05      1.27      0.00      0.00      0.00      0.00
14:59:01  br-1f72558afd59      0.45      0.15      0.02      0.01      0.00      0.00      0.00      0.00
14:59:01      vetha8ec03b     34.94     27.48      3.79      4.06      0.00      0.00      0.00      0.00
15:00:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:00:01  br-1f72558afd59      0.02      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:00:01      vetha8ec03b     14.20      9.67      1.11      1.41      0.00      0.00      0.00      0.00
15:00:01      vethff37da4     12.73     17.06      2.21      1.70      0.00      0.00      0.00      0.00
15:01:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
15:01:01             ens3   1965.93   1252.45  36531.61    189.95      0.00      0.00      0.00      0.00
15:01:01               lo     27.02     27.02      2.43      2.43      0.00      0.00      0.00      0.00
Average:          docker0     20.40     26.96      1.32    224.45      0.00      0.00      0.00      0.00
Average:             ens3    272.08    171.29   5947.31     20.20      0.00      0.00      0.00      0.00
Average:               lo      3.84      3.84      0.35      0.35      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20904)  06/13/25  _x86_64_  (8 CPU)

14:54:20  LINUX RESTART  (8 CPU)

14:55:01  CPU    %user   %nice  %system  %iowait  %steal   %idle
14:56:01  all    10.39    0.00     1.54     1.81    0.04   86.21
14:56:01    0     4.33    0.00     1.34     0.28    0.02   94.03
14:56:01    1     8.01    0.00     1.39     0.84    0.03   89.74
14:56:01    2     3.31    0.00     1.20     3.56    0.07   91.86
14:56:01    3    26.81    0.00     2.39     1.27    0.08   69.44
14:56:01    4     6.01    0.00     1.22     0.17    0.02   92.59
14:56:01    5     8.52    0.00     1.56     1.54    0.05   88.33
14:56:01    6    24.15    0.00     2.33     0.45    0.03   73.03
14:56:01    7     2.04    0.00     0.95     6.35    0.02   90.64
14:57:01  all    24.62    0.00     7.41     8.25    0.10   59.61
14:57:01    0    31.29    0.00     7.51     3.27    0.10   57.83
14:57:01    1    25.08    0.00     7.77     0.85    0.08   66.21
14:57:01    2    19.10    0.00     7.95     4.93    0.15   67.87
14:57:01    3    21.14    0.00     6.70    21.20    0.08   50.87
14:57:01    4    27.14    0.00     6.12     2.81    0.09   63.85
14:57:01    5    25.87    0.00     6.84     2.42    0.08   64.79
14:57:01    6    20.62    0.00     9.00    17.94    0.10   52.34
14:57:01    7    26.77    0.00     7.41    12.57    0.10   53.15
14:58:01  all    17.85    0.00     2.15     0.65    0.08   79.27
14:58:01    0    21.28    0.00     2.05     0.02    0.07   76.58
14:58:01    1    21.73    0.00     2.54     0.37    0.08   75.27
14:58:01    2    13.59    0.00     1.81     0.07    0.08   84.46
14:58:01    3    14.05    0.00     2.17     1.48    0.10   82.20
14:58:01    4    13.67    0.00     1.63     0.62    0.08   84.00
14:58:01    5    25.05    0.00     2.26     0.89    0.07   71.73
14:58:01    6    15.53    0.00     2.83     1.71    0.08   79.85
14:58:01    7    17.89    0.00     1.87     0.03    0.07   80.14
14:59:01  all     9.06    0.00     1.79     0.50    0.06   88.59
14:59:01    0     8.93    0.00     1.54     0.07    0.07   89.40
14:59:01    1     6.08    0.00     2.12     0.08    0.05   91.66
14:59:01    2     8.66    0.00     1.58     0.07    0.07   89.62
14:59:01    3    10.37    0.00     2.15     0.05    0.07   87.36
14:59:01    4     7.92    0.00     1.86     1.26    0.08   88.88
14:59:01    5    13.07    0.00     2.11     2.33    0.05   82.45
14:59:01    6    10.37    0.00     1.76     0.08    0.08   87.71
14:59:01    7     7.10    0.00     1.27     0.03    0.05   91.55
15:00:01  all     1.89    0.00     0.50     0.03    0.04   97.54
15:00:01    0     1.03    0.00     0.53     0.03    0.03   98.36
15:00:01    1     1.72    0.00     0.67     0.02    0.03   97.56
15:00:01    2     2.32    0.00     0.63     0.02    0.05   96.98
15:00:01    3     3.04    0.00     0.38     0.00    0.05   96.53
15:00:01    4     1.64    0.00     0.43     0.10    0.05   97.78
15:00:01    5     2.15    0.00     0.48     0.02    0.07   97.28
15:00:01    6     1.90    0.00     0.38     0.02    0.05   97.65
15:00:01    7     1.33    0.00     0.42     0.05    0.02   98.18
15:01:01  all     6.23    0.00     0.68     0.18    0.03   92.88
15:01:01    0     3.57    0.00     0.72     0.07    0.02   95.63
15:01:01    1     0.94    0.00     0.58     0.05    0.03   98.40
15:01:01    2     1.17    0.00     0.40     0.02    0.02   98.40
15:01:01    3     9.26    0.00     0.83     0.03    0.03   89.84
15:01:01    4    13.30    0.00     0.73     0.07    0.03   85.87
15:01:01    5     7.47    0.00     0.65     0.17    0.02   91.69
15:01:01    6    10.57    0.00     0.89     0.08    0.05   88.41
15:01:01    7     3.54    0.00     0.63     0.93    0.03   94.86
Average:  all    11.64    0.00     2.33     1.89    0.06   84.08
Average:    0    11.69    0.00     2.27     0.62    0.05   85.38
Average:    1    10.56    0.00     2.50     0.37    0.05   86.52
Average:    2     8.01    0.00     2.25     1.44    0.07   88.23
Average:    3    14.06    0.00     2.42     3.96    0.07   79.49
Average:    4    11.56    0.00     1.98     0.83    0.06   85.57
Average:    5    13.66    0.00     2.31     1.22    0.06   82.76
Average:    6    13.84    0.00     2.85     3.35    0.07   79.89
Average:    7     9.74    0.00     2.08     3.31    0.05   84.82
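Note: the sar tables above come from the sysstat data collected on the build node during the run. A minimal sketch of reproducing them for the same window (assuming sysstat's data collector is active on the node; saDD is a placeholder for the day-of-month data file, and the data directory varies by distribution):
  # Disk I/O (-b), memory (-r), and per-interface network (-n DEV) stats for the build window
  sar -b -r -n DEV -f /var/log/sysstat/saDD -s 14:55:00 -e 15:01:30
  # Per-core CPU utilization (-P ALL) over the same window
  sar -P ALL -f /var/log/sysstat/saDD -s 14:55:00 -e 15:01:30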