Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141338
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-22113 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-t7Nigcf02A8W/agent.2091
SSH_AGENT_PID=2093
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/private_key_9035718241125720744.key (/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/private_key_9035718241125720744.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/38/141338/1 # timeout=30
 > git rev-parse 9f75949931e109c1f8e6d9171342571c37f1f409^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision 9f75949931e109c1f8e6d9171342571c37f1f409 (refs/changes/38/141338/1)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9f75949931e109c1f8e6d9171342571c37f1f409 # timeout=30
Commit message: "Fix CSIT Helm kafka installation"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk ed38a50541249063daf2cfb00b312fb173adeace # timeout=10
provisioning config files...
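For reference, the Gerrit checkout the job performs above can be reproduced locally with plain git; a minimal sketch using only the refs and SHAs that appear in this log (the clone directory is arbitrary):

    git clone git://cloud.onap.org/mirror/policy/docker.git && cd docker
    # fetch patchset 1 of change 141338, the same ref fetched above
    git fetch origin refs/changes/38/141338/1
    git checkout -f FETCH_HEAD   # resolves to 9f75949931e109c1f8e6d9171342571c37f1f409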
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins3056054196844606920.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-ah7R
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ah7R/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-ah7R/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.38
botocore==1.38.38
bs4==0.0.2
cachetools==5.5.2
certifi==2025.6.15
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.3.0
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
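The lf-activate-venv() messages above boil down to a standard venv bootstrap; a minimal sketch, assuming only what the log shows (the /tmp/venv-ah7R path is generated per build):

    # create and activate a python3 venv, install lftools, then emit the requirements listing
    python3 -m venv /tmp/venv-ah7R
    . /tmp/venv-ah7R/bin/activate
    pip install lftools
    pip freeze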
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/sh /tmp/jenkins17578428080587375763.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/sh -xe /tmp/jenkins12865957653742012947.sh
+ /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/run-project-csit.sh xacml-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress output omitted; 60.2M downloaded at 72.8M/s]
Setting project configuration for: xacml-pdp
Configuring docker compose...
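The missing Compose plugin is installed on the fly above. A minimal sketch of the usual per-user plugin install (the release URL is an assumption; the script's actual download source is not shown in the log), preceded by the --password-stdin login form the warning recommends ($DOCKER_USERNAME and $DOCKER_PASSWORD are placeholder variables):

    # safer login, per the warning above (placeholder credentials)
    echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
    # install the Compose v2 CLI plugin for the current user (assumed release URL)
    DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
    mkdir -p "$DOCKER_CONFIG/cli-plugins"
    curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
      -o "$DOCKER_CONFIG/cli-plugins/docker-compose"
    chmod +x "$DOCKER_CONFIG/cli-plugins/docker-compose"
    docker compose version   # 'compose' should now resolve as a docker subcommand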
Starting xacml-pdp using postgres + Grafana/Prometheus
xacml-pdp Pulling
postgres Pulling
grafana Pulling
prometheus Pulling
api Pulling
pap Pulling
policy-db-migrator Pulling
zookeeper Pulling
kafka Pulling
[per-layer download/extract progress omitted]
xacml-pdp Pulled
api Pulled
pap Pulled
policy-db-migrator Pulled
[log truncated mid-pull; the remaining images (postgres, grafana, prometheus, zookeeper, kafka) were still downloading]
[=================================================> ] 107.1MB/109.1MB 6ac0e4adf315 Extracting [===================> ] 24.51MB/62.07MB e73cb4a42719 Verifying Checksum e73cb4a42719 Download complete 2d429b9e73a6 Extracting [===================================> ] 20.64MB/29.13MB eabd8714fec9 Extracting [==============> ] 108.1MB/375MB 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB 85dde7dceb0a Extracting [==================> ] 23.95MB/63.48MB 6ac0e4adf315 Extracting [=======================> ] 28.97MB/62.07MB 2d429b9e73a6 Extracting [========================================> ] 23.59MB/29.13MB eabd8714fec9 Extracting [==============> ] 112MB/375MB 85dde7dceb0a Extracting [=====================> ] 26.74MB/63.48MB 55f2b468da67 Extracting [==================================> ] 177.7MB/257.9MB 6ac0e4adf315 Extracting [=========================> ] 31.75MB/62.07MB eabd8714fec9 Extracting [===============> ] 114.8MB/375MB 2d429b9e73a6 Extracting [==========================================> ] 24.77MB/29.13MB 6ac0e4adf315 Extracting [==============================> ] 37.88MB/62.07MB 85dde7dceb0a Extracting [======================> ] 28.41MB/63.48MB 55f2b468da67 Extracting [==================================> ] 180.5MB/257.9MB eabd8714fec9 Extracting [===============> ] 117MB/375MB 2d429b9e73a6 Extracting [===============================================> ] 27.43MB/29.13MB 6ac0e4adf315 Extracting [====================================> ] 45.12MB/62.07MB 85dde7dceb0a Extracting [=======================> ] 30.08MB/63.48MB 55f2b468da67 Extracting [===================================> ] 183.3MB/257.9MB eabd8714fec9 Extracting [===============> ] 119.2MB/375MB 6ac0e4adf315 Extracting [=============================================> ] 56.82MB/62.07MB 55f2b468da67 Extracting [====================================> ] 186.6MB/257.9MB 85dde7dceb0a Extracting [=========================> ] 31.75MB/63.48MB eabd8714fec9 Extracting [================> ] 122.6MB/375MB 6ac0e4adf315 Extracting [=================================================> ] 61.28MB/62.07MB 55f2b468da67 Extracting [====================================> ] 188.8MB/257.9MB 85dde7dceb0a Extracting [==========================> ] 33.98MB/63.48MB 2d429b9e73a6 Extracting [================================================> ] 28.31MB/29.13MB eabd8714fec9 Extracting [================> ] 125.9MB/375MB 55f2b468da67 Extracting [====================================> ] 190.5MB/257.9MB 6ac0e4adf315 Extracting [==================================================>] 62.07MB/62.07MB 85dde7dceb0a Extracting [============================> ] 36.21MB/63.48MB 2d429b9e73a6 Extracting [==================================================>] 29.13MB/29.13MB eabd8714fec9 Extracting [=================> ] 128.1MB/375MB 55f2b468da67 Extracting [=====================================> ] 193.3MB/257.9MB 85dde7dceb0a Extracting [=============================> ] 37.88MB/63.48MB eabd8714fec9 Extracting [=================> ] 130.9MB/375MB 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB 85dde7dceb0a Extracting [===============================> ] 40.11MB/63.48MB eabd8714fec9 Extracting [=================> ] 134.3MB/375MB 85dde7dceb0a Extracting [================================> ] 40.67MB/63.48MB 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB eabd8714fec9 Extracting [==================> ] 138.7MB/375MB 85dde7dceb0a Extracting [==================================> ] 43.45MB/63.48MB 55f2b468da67 
Extracting [======================================> ] 197.2MB/257.9MB 85dde7dceb0a Extracting [====================================> ] 46.24MB/63.48MB eabd8714fec9 Extracting [===================> ] 142.6MB/375MB 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 85dde7dceb0a Extracting [======================================> ] 49.02MB/63.48MB eabd8714fec9 Extracting [===================> ] 146.5MB/375MB 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB eabd8714fec9 Extracting [====================> ] 151MB/375MB 85dde7dceb0a Extracting [========================================> ] 51.81MB/63.48MB 55f2b468da67 Extracting [=======================================> ] 205MB/257.9MB eabd8714fec9 Extracting [====================> ] 155.4MB/375MB 85dde7dceb0a Extracting [==========================================> ] 54.59MB/63.48MB eabd8714fec9 Extracting [====================> ] 156.5MB/375MB 85dde7dceb0a Extracting [============================================> ] 56.26MB/63.48MB 55f2b468da67 Extracting [========================================> ] 206.7MB/257.9MB eabd8714fec9 Extracting [=====================> ] 159.9MB/375MB eabd8714fec9 Extracting [======================> ] 166MB/375MB 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB eabd8714fec9 Extracting [======================> ] 168.8MB/375MB eabd8714fec9 Extracting [========================> ] 182.2MB/375MB 85dde7dceb0a Extracting [==============================================> ] 59.05MB/63.48MB 2d429b9e73a6 Pull complete 6ac0e4adf315 Pull complete eabd8714fec9 Extracting [=========================> ] 192.2MB/375MB 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB eabd8714fec9 Extracting [==========================> ] 200.5MB/375MB 85dde7dceb0a Extracting [==============================================> ] 59.6MB/63.48MB 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB eabd8714fec9 Extracting [===========================> ] 207.2MB/375MB 85dde7dceb0a Extracting [=================================================> ] 62.95MB/63.48MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB 55f2b468da67 Extracting [=========================================> ] 211.7MB/257.9MB eabd8714fec9 Extracting [============================> ] 213.4MB/375MB eabd8714fec9 Extracting [============================> ] 217.3MB/375MB 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB eabd8714fec9 Extracting [=============================> ] 222.3MB/375MB 55f2b468da67 Extracting [=========================================> ] 215.6MB/257.9MB eabd8714fec9 Extracting [==============================> ] 226.7MB/375MB 55f2b468da67 Extracting [==========================================> ] 219.5MB/257.9MB eabd8714fec9 Extracting [==============================> ] 231.2MB/375MB 55f2b468da67 Extracting [===========================================> ] 222.3MB/257.9MB eabd8714fec9 Extracting [===============================> ] 235.1MB/375MB 55f2b468da67 Extracting [===========================================> ] 225.1MB/257.9MB 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB eabd8714fec9 Extracting 
[===============================> ] 236.7MB/375MB 55f2b468da67 Extracting [===========================================> ] 225.6MB/257.9MB eabd8714fec9 Extracting [===============================> ] 239.5MB/375MB 55f2b468da67 Extracting [============================================> ] 227.8MB/257.9MB eabd8714fec9 Extracting [================================> ] 242.3MB/375MB 55f2b468da67 Extracting [============================================> ] 229.5MB/257.9MB eabd8714fec9 Extracting [================================> ] 245.7MB/375MB 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB eabd8714fec9 Extracting [=================================> ] 249MB/375MB 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB eabd8714fec9 Extracting [=================================> ] 252.9MB/375MB 55f2b468da67 Extracting [=============================================> ] 234.5MB/257.9MB eabd8714fec9 Extracting [==================================> ] 256.2MB/375MB eabd8714fec9 Extracting [==================================> ] 259.6MB/375MB 55f2b468da67 Extracting [=============================================> ] 236.7MB/257.9MB eabd8714fec9 Extracting [===================================> ] 263.5MB/375MB 55f2b468da67 Extracting [==============================================> ] 241.8MB/257.9MB eabd8714fec9 Extracting [===================================> ] 267.9MB/375MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB f3b09c502777 Extracting [> ] 557.1kB/56.52MB 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB 55f2b468da67 Extracting [=================================================> ] 255.7MB/257.9MB 85dde7dceb0a Pull complete f3b09c502777 Extracting [===> ] 4.456MB/56.52MB f3b09c502777 Extracting [====> ] 5.014MB/56.52MB 55f2b468da67 Extracting [=================================================> ] 256.8MB/257.9MB eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB f3b09c502777 Extracting [=======> ] 8.356MB/56.52MB eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB f3b09c502777 Extracting [==========> ] 12.26MB/56.52MB eabd8714fec9 Extracting [====================================> ] 277.4MB/375MB f3b09c502777 Extracting [==============> ] 16.71MB/56.52MB 46eab5b44a35 Pull complete f3b09c502777 Extracting [===============> ] 17.83MB/56.52MB f3b09c502777 Extracting [=================> ] 20.05MB/56.52MB eabd8714fec9 Extracting [=====================================> ] 280.8MB/375MB f3b09c502777 Extracting [=====================> ] 23.95MB/56.52MB eabd8714fec9 Extracting [======================================> ] 288MB/375MB f3b09c502777 Extracting [==========================> ] 30.08MB/56.52MB eabd8714fec9 Extracting [=======================================> ] 293.6MB/375MB f3b09c502777 Extracting [=======================================> ] 44.56MB/56.52MB eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 7009d5001b77 Extracting 
[==================================================>] 11.92kB/11.92kB f3b09c502777 Extracting [================================================> ] 55.15MB/56.52MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB eabd8714fec9 Extracting [========================================> ] 304.7MB/375MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB c4d302cc468d Extracting [> ] 65.54kB/4.534MB 55f2b468da67 Pull complete 82bfc142787e Extracting [> ] 98.3kB/8.613MB f3b09c502777 Pull complete 7009d5001b77 Pull complete 408012a7b118 Extracting [==================================================>] 637B/637B 408012a7b118 Extracting [==================================================>] 637B/637B 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB c4d302cc468d Extracting [===> ] 327.7kB/4.534MB 82bfc142787e Extracting [==> ] 491.5kB/8.613MB eabd8714fec9 Extracting [=========================================> ] 312MB/375MB c4d302cc468d Extracting [==================================================>] 4.534MB/4.534MB 82bfc142787e Extracting [================================================> ] 8.356MB/8.613MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB eabd8714fec9 Extracting [==========================================> ] 317MB/375MB eabd8714fec9 Extracting [==========================================> ] 319.8MB/375MB eabd8714fec9 Extracting [==========================================> ] 322MB/375MB eabd8714fec9 Extracting [===========================================> ] 325.9MB/375MB 408012a7b118 Pull complete 82bfc142787e Pull complete 538deb30e80c Pull complete eabd8714fec9 Extracting [===========================================> ] 326.4MB/375MB c4d302cc468d Pull complete eabd8714fec9 Extracting [===========================================> ] 327MB/375MB eabd8714fec9 Extracting [===========================================> ] 328.7MB/375MB eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB eabd8714fec9 Extracting [============================================> ] 333.7MB/375MB eabd8714fec9 Extracting [=============================================> ] 339.2MB/375MB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB eabd8714fec9 Extracting [=============================================> ] 344.8MB/375MB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB eabd8714fec9 Extracting [==============================================> ] 351.5MB/375MB eabd8714fec9 Extracting 
[===============================================> ] 357.1MB/375MB eabd8714fec9 Extracting [================================================> ] 362.6MB/375MB eabd8714fec9 Extracting [=================================================> ] 368.8MB/375MB eabd8714fec9 Extracting [=================================================> ] 374.9MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB 44986281b8b9 Pull complete 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB eabd8714fec9 Pull complete bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB grafana Pulled 01e0882c90d9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 46baca71a4ef Pull complete 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB bf70c5107ab5 Pull complete 45fd2fec8a19 Pull complete 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 531ee2cf3c0c Extracting [======> ] 983kB/8.066MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 8f10199ed94b Extracting [============> ] 2.163MB/8.768MB 1ccde423731d Pull complete 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 531ee2cf3c0c Extracting [===========================> ] 4.424MB/8.066MB b0e0ef7895f4 Extracting [=========> ] 7.078MB/37.01MB 8f10199ed94b Pull complete 7221d93db8a9 Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B 531ee2cf3c0c Extracting [=====================================> ] 5.997MB/8.066MB b0e0ef7895f4 Extracting [=====================> ] 16.12MB/37.01MB 531ee2cf3c0c Extracting [===============================================> ] 7.668MB/8.066MB b0e0ef7895f4 Extracting [=============================> ] 22.02MB/37.01MB f963a77d2726 Pull complete 7df673c7455d Pull complete 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB prometheus Pulled 531ee2cf3c0c Pull complete ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB b0e0ef7895f4 Extracting [==========================================> ] 31.46MB/37.01MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB b0e0ef7895f4 Pull complete ed54a7dee1d8 Extracting [============> ] 294.9kB/1.196MB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting 
[==================================================>] 1.105kB/1.105kB f3a82e9f1761 Extracting [============> ] 11.01MB/44.41MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Pull complete 12c5c803443f Extracting [==================================================>] 116B/116B 12c5c803443f Extracting [==================================================>] 116B/116B f3a82e9f1761 Extracting [=========================> ] 22.48MB/44.41MB c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B f3a82e9f1761 Extracting [============================> ] 25.69MB/44.41MB 12c5c803443f Pull complete e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B e27c75a98748 Pull complete f3a82e9f1761 Extracting [==========================================> ] 37.62MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB e73cb4a42719 Extracting [===> ] 6.685MB/109.1MB 40a5eed61bb0 Pull complete e73cb4a42719 Extracting [====> ] 10.03MB/109.1MB 79161a3f5362 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB e73cb4a42719 Extracting [======> ] 15.04MB/109.1MB e040ea11fa10 Pull complete e73cb4a42719 Extracting [=========> ] 20.61MB/109.1MB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B e73cb4a42719 Extracting [===========> ] 25.62MB/109.1MB 09d5a3f70313 Extracting [======> ] 15.04MB/109.2MB e73cb4a42719 Extracting [==============> ] 31.2MB/109.1MB 2e8a7df9c2ee Pull complete 09d5a3f70313 Extracting [============> ] 26.74MB/109.2MB 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B e73cb4a42719 Extracting [=================> ] 37.32MB/109.1MB 09d5a3f70313 Extracting [================> ] 36.77MB/109.2MB 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B e73cb4a42719 Extracting [===================> ] 43.45MB/109.1MB 09d5a3f70313 Extracting [========================> ] 52.92MB/109.2MB 41dac8b43ba6 Pull complete 
71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB e73cb4a42719 Extracting [======================> ] 49.02MB/109.1MB 09d5a3f70313 Extracting [================================> ] 70.19MB/109.2MB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB 09d5a3f70313 Extracting [=======================================> ] 85.23MB/109.2MB 71a9f6a9ab4d Pull complete e73cb4a42719 Extracting [=========================> ] 55.15MB/109.1MB 09d5a3f70313 Extracting [==============================================> ] 101.4MB/109.2MB da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 09d5a3f70313 Extracting [=================================================> ] 107.5MB/109.2MB e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB da3ed5db7103 Extracting [====> ] 12.26MB/127.4MB da3ed5db7103 Extracting [======> ] 17.83MB/127.4MB e73cb4a42719 Extracting [============================> ] 62.39MB/109.1MB 09d5a3f70313 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB da3ed5db7103 Extracting [==========> ] 26.74MB/127.4MB e73cb4a42719 Extracting [===============================> ] 69.63MB/109.1MB 356f5c2c843b Pull complete da3ed5db7103 Extracting [===============> ] 40.11MB/127.4MB kafka Pulled e73cb4a42719 Extracting [==================================> ] 75.76MB/109.1MB da3ed5db7103 Extracting [======================> ] 56.82MB/127.4MB e73cb4a42719 Extracting [=====================================> ] 82.44MB/109.1MB da3ed5db7103 Extracting [============================> ] 71.86MB/127.4MB e73cb4a42719 Extracting [=========================================> ] 90.8MB/109.1MB da3ed5db7103 Extracting [==================================> ] 86.9MB/127.4MB e73cb4a42719 Extracting [==========================================> ] 93.59MB/109.1MB da3ed5db7103 Extracting [=======================================> ] 101.9MB/127.4MB e73cb4a42719 Extracting [============================================> ] 97.48MB/109.1MB da3ed5db7103 Extracting [=============================================> ] 116.4MB/127.4MB e73cb4a42719 Extracting [==============================================> ] 100.8MB/109.1MB da3ed5db7103 Extracting [===============================================> ] 120.3MB/127.4MB e73cb4a42719 Extracting [===============================================> ] 104.2MB/109.1MB da3ed5db7103 Extracting [================================================> ] 124.8MB/127.4MB e73cb4a42719 Extracting [=================================================> ] 107MB/109.1MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB c955f6e31a04 Pull complete zookeeper Pulled e73cb4a42719 Pull complete a83b68436f09 Extracting 
postgres Pulled
Network compose_default Creating
Network compose_default Created
Container prometheus Creating
Container postgres Creating
Container zookeeper Creating
Container postgres Created
Container prometheus Created
Container zookeeper Created
Container policy-db-migrator Creating
Container grafana Creating
Container kafka Creating
Container grafana Created
Container kafka Created
Container policy-db-migrator Created
Container policy-api Creating
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-xacml-pdp Creating
Container policy-xacml-pdp Created
Container zookeeper Starting
Container prometheus Starting
Container postgres Starting
Container postgres Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container zookeeper Started
Container kafka Starting
Container policy-api Started
Container kafka Started
Container policy-pap Starting
Container policy-pap Started
Container policy-xacml-pdp Starting
Container prometheus Started
Container grafana Starting
Container grafana Started
Container policy-xacml-pdp Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 1 minute for xacml-pdp to start...
Checking if REST port 30004 is open on localhost ...
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up About a minute
Cloning into '/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests/models'...
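The readiness gate above ("Checking if REST port 30004 is open on localhost ...") is ordinarily nothing more than a TCP connect poll; a minimal sketch in shell, assuming netcat is available on the build agent (the actual CSIT helper script may loop and log differently):

  # Hypothetical readiness probe: poll until the xacml-pdp REST port accepts TCP connections.
  for attempt in $(seq 1 30); do
    if nc -z localhost 30004; then
      echo "REST port 30004 is open on localhost"
      break
    fi
    sleep 2
  done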
Building robot framework docker image
sha256:d57e8922bb33c5a46b6292570b98c67c00f72f2f7119382d0ac3d19b6f4899e6
top - 14:49:51 up 4 min, 0 users, load average: 2.28, 1.72, 0.75
Tasks: 228 total, 1 running, 151 sleeping, 0 stopped, 0 zombie
%Cpu(s): 13.6 us, 3.2 sy, 0.0 ni, 77.5 id, 5.6 wa, 0.0 hi, 0.1 si, 0.1 st
              total        used        free      shared  buff/cache   available
Mem:            31G        2.5G         21G         27M        7.1G         28G
Swap:          1.0G          0B        1.0G
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up About a minute
CONTAINER ID   NAME               CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
dc774af3111d   policy-xacml-pdp   0.58%   173.8MiB / 31.41GiB   0.54%   45.9kB / 56.6kB   0B / 4.1kB       51
ec712e5df6fc   policy-pap         1.35%   462.8MiB / 31.41GiB   1.44%   2.14MB / 1.1MB    45.1kB / 139MB   68
341861856f70   policy-api         0.09%   448.7MiB / 31.41GiB   1.39%   1.15MB / 1.02MB   0B / 12.3kB      59
d7b10b44dc16   kafka              2.72%   389.2MiB / 31.41GiB   1.21%   191kB / 183kB     0B / 578kB       83
0bfa60b22fec   grafana            0.16%   111.2MiB / 31.41GiB   0.35%   19.1MB / 258kB    0B / 31.8MB      18
aa09ae405794   zookeeper          0.08%   89.42MiB / 31.41GiB   0.28%   64.6kB / 54.6kB   225kB / 369kB    62
5bb225032399   prometheus         0.04%   21.08MiB / 31.41GiB   0.07%   131kB / 5.27kB    0B / 0B          13
e3bcee0414d7   postgres           0.00%   85.58MiB / 31.41GiB   0.27%   2.56MB / 3.75MB   4.1kB / 158MB    26
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
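The "Run Robot test" step amounts to passing each -v pair above to the robot CLI together with the two suite files; a minimal sketch of an equivalent invocation, with the variable list abbreviated to the xacml-pdp-relevant entries (the wrapper inside the policy-csit image may add further flags):

  # Hypothetical equivalent of the logged invocation; /tmp/results matches the Output/Log/Report paths reported below.
  robot --outputdir /tmp/results \
        -v POLICY_PDPX_IP:policy-xacml-pdp:6969 \
        -v PROMETHEUS_IP:prometheus:9090 \
        -v TEST_ENV:docker \
        xacml-pdp-test.robot xacml-pdp-slas.robot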
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify policy xacml-pdp health check                  | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics   | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | MakeTopics :: Creates the Policy topics                              | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ExecuteXacmlPolicy                                                   | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test                       | PASS |
policy-csit | 4 tests, 4 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve...| PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co...| PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas                       | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas                                      | PASS |
policy-csit | 6 tests, 6 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log:    /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up 3 minutes
Shut down started!
Collecting logs from docker compose containers...
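The two SLA cases above reduce to reading the decision counter that policy-xacml-pdp exports and Prometheus scrapes; a rough manual spot-check against the Prometheus port published earlier (the metric name below is illustrative only, not necessarily the one the suite asserts on, and jq is assumed for readability):

  # Hypothetical counter query via the Prometheus HTTP API (which accepts form-encoded POST).
  curl -s 'http://localhost:30259/api/v1/query' \
       --data-urlencode 'query=pdpx_policy_decisions_total' | jq '.data.result'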
grafana | logger=settings t=2025-06-18T14:48:04.329360467Z level=info msg="Starting Grafana" version=12.0.2 commit=5bda17e7c1cb313eb96266f2fdda73a6b35c3977 branch=HEAD compiled=2025-06-18T14:48:04Z
grafana | logger=settings t=2025-06-18T14:48:04.329718695Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-18T14:48:04.329731475Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-18T14:48:04.329736725Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-18T14:48:04.329741595Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-18T14:48:04.329745265Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-18T14:48:04.329749495Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-18T14:48:04.329753456Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-18T14:48:04.329757566Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-18T14:48:04.329760756Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-18T14:48:04.329764046Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-18T14:48:04.329769176Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-18T14:48:04.329773106Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-18T14:48:04.329781146Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-18T14:48:04.329790966Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-18T14:48:04.329794266Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-18T14:48:04.329802017Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-18T14:48:04.329806587Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-18T14:48:04.329811417Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-18T14:48:04.330167544Z level=info msg=FeatureToggles ssoSettingsSAML=true dashgpt=true unifiedStorageSearchPermissionFiltering=true publicDashboardsScene=true logsPanelControls=true angularDeprecationUI=true reportingUseRawTimeRange=true dataplaneFrontendFallback=true tlsMemcached=true failWrongDSUID=true logRowsPopoverMenu=true alertingRulePermanentlyDelete=true logsInfiniteScrolling=true azureMonitorPrometheusExemplars=true recordedQueriesMulti=true panelMonitoring=true cloudWatchNewLabelParsing=true prometheusAzureOverrideAudience=true kubernetesPlaylists=true unifiedRequestLog=true groupToNestedTableTransformation=true newDashboardSharingComponent=true lokiStructuredMetadata=true alertingInsights=true alertRuleRestore=true ssoSettingsApi=true lokiQueryHints=true cloudWatchRoundUpEndTime=true awsAsyncQueryCaching=true lokiLabelNamesQueryApi=true alertingUIOptimizeReducer=true recoveryThreshold=true lokiQuerySplitting=true annotationPermissionUpdate=true dashboardScene=true cloudWatchCrossAccountQuerying=true grafanaconThemes=true alertingApiServer=true promQLScope=true alertingRuleVersionHistoryRestore=true alertingNotificationsStepMode=true logsExploreTableVisualisation=true alertingRuleRecoverDeleted=true nestedFolders=true pluginsDetailsRightPanel=true correlations=true influxdbBackendMigration=true alertingSimplifiedRouting=true useSessionStorageForRedirection=true kubernetesClientDashboardsFolders=true externalCorePlugins=true preinstallAutoUpdate=true prometheusUsesCombobox=true onPremToCloudMigrations=true dashboardSceneSolo=true newPDFRendering=true dashboardSceneForViewers=true alertingQueryAndExpressionsStepMode=true transformationsRedesign=true formatString=true pinNavItems=true addFieldFromCalculationStatFunctions=true newFiltersUI=true azureMonitorEnableUserAuth=true logsContextDatasourceUi=true
grafana | logger=sqlstore t=2025-06-18T14:48:04.330230855Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-18T14:48:04.330250215Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-18T14:48:04.332272326Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-18T14:48:04.332288086Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-18T14:48:04.332996031Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-18T14:48:04.33393705Z level=info msg="Migration successfully executed" id="create migration_log table" duration=940.629µs
grafana | logger=migrator t=2025-06-18T14:48:04.338009202Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-18T14:48:04.338576534Z level=info msg="Migration successfully executed" id="create user table" duration=567.302µs
grafana | logger=migrator t=2025-06-18T14:48:04.343379491Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2025-06-18T14:48:04.344115216Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=735.245µs
grafana | logger=migrator t=2025-06-18T14:48:04.347342562Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-18T14:48:04.348069397Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=726.075µs
grafana | logger=migrator t=2025-06-18T14:48:04.351305542Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.351963785Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=657.903µs
grafana | logger=migrator t=2025-06-18T14:48:04.356958557Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.357592769Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=634.292µs
grafana | logger=migrator t=2025-06-18T14:48:04.36106806Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.363450809Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.382319ms
grafana | logger=migrator t=2025-06-18T14:48:04.366712495Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-18T14:48:04.367538642Z level=info msg="Migration successfully executed" id="create user table v2" duration=825.657µs
grafana | logger=migrator t=2025-06-18T14:48:04.372551203Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-18T14:48:04.373251928Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=700.055µs
grafana | logger=migrator t=2025-06-18T14:48:04.376192297Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-18T14:48:04.376892361Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=699.644µs
grafana | logger=migrator t=2025-06-18T14:48:04.3807829Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-18T14:48:04.381153877Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=371.087µs
grafana | logger=migrator t=2025-06-18T14:48:04.385093687Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-18T14:48:04.385583887Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=489.89µs
grafana | logger=migrator t=2025-06-18T14:48:04.389868455Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-18T14:48:04.39212654Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=2.256985ms
grafana | logger=migrator t=2025-06-18T14:48:04.395192012Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-18T14:48:04.395220283Z level=info msg="Migration successfully executed" id="Update user table charset" duration=28.831µs
grafana | logger=migrator t=2025-06-18T14:48:04.397500389Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-18T14:48:04.398582422Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.081833ms
grafana | logger=migrator t=2025-06-18T14:48:04.403384578Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-18T14:48:04.403642404Z level=info msg="Migration successfully executed" id="Add missing user data" duration=257.675µs
grafana | logger=migrator t=2025-06-18T14:48:04.405975391Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-18T14:48:04.407035873Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.058752ms
grafana | logger=migrator t=2025-06-18T14:48:04.411185547Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-18T14:48:04.411902051Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=715.704µs
grafana | logger=migrator t=2025-06-18T14:48:04.414224459Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-18T14:48:04.415328011Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.102942ms
grafana | logger=migrator t=2025-06-18T14:48:04.419970415Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-18T14:48:04.42807385Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.102725ms
grafana | logger=migrator t=2025-06-18T14:48:04.43202286Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-18T14:48:04.433122612Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.099122ms
grafana | logger=migrator t=2025-06-18T14:48:04.436143024Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-18T14:48:04.436356098Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=213.214µs
grafana | logger=migrator t=2025-06-18T14:48:04.439806827Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-18T14:48:04.440529853Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=722.356µs
grafana | logger=migrator t=2025-06-18T14:48:04.446610606Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-18T14:48:04.447836761Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.225965ms
grafana | logger=migrator t=2025-06-18T14:48:04.450805641Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-18T14:48:04.451129287Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=325.026µs
grafana | logger=migrator t=2025-06-18T14:48:04.455244331Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
grafana | logger=migrator t=2025-06-18T14:48:04.455834343Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=590.102µs
grafana | logger=migrator t=2025-06-18T14:48:04.460957796Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2025-06-18T14:48:04.461443537Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=479.241µs
grafana | logger=migrator t=2025-06-18T14:48:04.464530249Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2025-06-18T14:48:04.464890696Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=360.157µs
grafana | logger=migrator t=2025-06-18T14:48:04.46805456Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2025-06-18T14:48:04.468868367Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=813.197µs
grafana | logger=migrator t=2025-06-18T14:48:04.472072932Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2025-06-18T14:48:04.472819348Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=745.786µs
grafana | logger=migrator t=2025-06-18T14:48:04.477682646Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2025-06-18T14:48:04.478532724Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=849.507µs
grafana | logger=migrator t=2025-06-18T14:48:04.481423862Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2025-06-18T14:48:04.482167827Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=743.785µs
grafana | logger=migrator t=2025-06-18T14:48:04.484936343Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2025-06-18T14:48:04.485611517Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=674.464µs
grafana | logger=migrator t=2025-06-18T14:48:04.520608307Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2025-06-18T14:48:04.520644198Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=37.071µs
grafana | logger=migrator t=2025-06-18T14:48:04.525307642Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.526316873Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.009011ms
grafana | logger=migrator t=2025-06-18T14:48:04.5296462Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.530655751Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.009251ms
grafana | logger=migrator t=2025-06-18T14:48:04.53553866Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.536192573Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=653.793µs
grafana | logger=migrator t=2025-06-18T14:48:04.539149623Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.539815397Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=665.184µs
grafana | logger=migrator t=2025-06-18T14:48:04.542131993Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.545302908Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.170085ms
grafana | logger=migrator t=2025-06-18T14:48:04.54983986Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-18T14:48:04.550709417Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=845.377µs
grafana | logger=migrator t=2025-06-18T14:48:04.55376352Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-18T14:48:04.554470954Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=706.804µs
grafana | logger=migrator t=2025-06-18T14:48:04.557530166Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-18T14:48:04.558242171Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=711.515µs
grafana | logger=migrator t=2025-06-18T14:48:04.562070519Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2025-06-18T14:48:04.562741412Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=670.473µs
grafana | logger=migrator t=2025-06-18T14:48:04.641872618Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2025-06-18T14:48:04.642946189Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.073341ms
grafana | logger=migrator t=2025-06-18T14:48:04.647093083Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2025-06-18T14:48:04.647467561Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=374.287µs
grafana | logger=migrator t=2025-06-18T14:48:04.650678456Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2025-06-18T14:48:04.651147765Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=468.789µs
grafana | logger=migrator t=2025-06-18T14:48:04.65583293Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2025-06-18T14:48:04.656345851Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=512.651µs
grafana | logger=migrator t=2025-06-18T14:48:04.659994764Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2025-06-18T14:48:04.661216529Z level=info msg="Migration successfully executed" id="create star table" duration=1.224285ms
grafana | logger=migrator t=2025-06-18T14:48:04.664077517Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2025-06-18T14:48:04.664724521Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=646.734µs
grafana | logger=migrator t=2025-06-18T14:48:04.669413816Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
grafana | logger=migrator t=2025-06-18T14:48:04.670975808Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.561712ms
grafana | logger=migrator t=2025-06-18T14:48:04.67452692Z level=info msg="Executing migration" id="Add column org_id in star"
grafana | logger=migrator t=2025-06-18T14:48:04.676117452Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.590512ms
grafana | logger=migrator t=2025-06-18T14:48:04.681526972Z level=info msg="Executing migration" id="Add column updated in star"
grafana | logger=migrator t=2025-06-18T14:48:04.683028502Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.50057ms
grafana | logger=migrator t=2025-06-18T14:48:04.687520463Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns"
grafana | logger=migrator t=2025-06-18T14:48:04.689373701Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.853238ms
grafana | logger=migrator t=2025-06-18T14:48:04.6947771Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2025-06-18T14:48:04.695674119Z level=info msg="Migration successfully executed" id="create org table v1" duration=896.229µs
grafana | logger=migrator t=2025-06-18T14:48:04.698913854Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.699744571Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=830.237µs
grafana | logger=migrator t=2025-06-18T14:48:04.705843385Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2025-06-18T14:48:04.706793804Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=949.059µs
grafana | logger=migrator t=2025-06-18T14:48:04.714157934Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.71544953Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.290186ms
grafana | logger=migrator t=2025-06-18T14:48:04.718696005Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.719957471Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.262196ms
grafana | logger=migrator t=2025-06-18T14:48:04.723156317Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2025-06-18T14:48:04.724965043Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.808576ms
grafana | logger=migrator t=2025-06-18T14:48:04.730755681Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2025-06-18T14:48:04.730855263Z level=info msg="Migration successfully executed" id="Update org table charset" duration=98.942µs
grafana | logger=migrator t=2025-06-18T14:48:04.734419415Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2025-06-18T14:48:04.734447376Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=28.071µs
grafana | logger=migrator t=2025-06-18T14:48:04.737654161Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2025-06-18T14:48:04.737953566Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=301.566µs
grafana | logger=migrator t=2025-06-18T14:48:04.741178631Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2025-06-18T14:48:04.742119051Z level=info msg="Migration successfully executed" id="create dashboard table" duration=939.49µs
grafana | logger=migrator t=2025-06-18T14:48:04.752064223Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2025-06-18T14:48:04.755379211Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=3.315077ms
grafana | logger=migrator t=2025-06-18T14:48:04.761131557Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2025-06-18T14:48:04.761997894Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=865.537µs
grafana | logger=migrator t=2025-06-18T14:48:04.765452645Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator
t=2025-06-18T14:48:04.766558658Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.104023ms grafana | logger=migrator t=2025-06-18T14:48:04.770206641Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-18T14:48:04.771593509Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.385798ms grafana | logger=migrator t=2025-06-18T14:48:04.777537979Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-18T14:48:04.778377897Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=839.258µs grafana | logger=migrator t=2025-06-18T14:48:04.78195203Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-18T14:48:04.79137158Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=9.41744ms grafana | logger=migrator t=2025-06-18T14:48:04.794423662Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-18T14:48:04.795014544Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=589.322µs grafana | logger=migrator t=2025-06-18T14:48:04.800118708Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-18T14:48:04.800821092Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=699.794µs grafana | logger=migrator t=2025-06-18T14:48:04.806050089Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-18T14:48:04.807434717Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.383198ms grafana | logger=migrator t=2025-06-18T14:48:04.814915768Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2025-06-18T14:48:04.815355607Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=439.658µs grafana | logger=migrator t=2025-06-18T14:48:04.819857449Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-18T14:48:04.82143434Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.575782ms grafana | logger=migrator t=2025-06-18T14:48:04.827572285Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-18T14:48:04.827591245Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=22.96µs grafana | logger=migrator t=2025-06-18T14:48:04.83176452Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-18T14:48:04.834895653Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.130373ms grafana | logger=migrator t=2025-06-18T14:48:04.841523978Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-18T14:48:04.843437026Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.912238ms grafana | logger=migrator 
t=2025-06-18T14:48:04.846965218Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-18T14:48:04.848919968Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.95037ms grafana | logger=migrator t=2025-06-18T14:48:04.855576523Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-18T14:48:04.856353479Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=777.046µs grafana | logger=migrator t=2025-06-18T14:48:04.865849591Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-18T14:48:04.867945814Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.097863ms grafana | logger=migrator t=2025-06-18T14:48:04.872957116Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-18T14:48:04.873746911Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=788.615µs grafana | logger=migrator t=2025-06-18T14:48:04.876800813Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-18T14:48:04.877517198Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=715.995µs grafana | logger=migrator t=2025-06-18T14:48:04.883743674Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-18T14:48:04.883773365Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=29.841µs grafana | logger=migrator t=2025-06-18T14:48:04.893942272Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-18T14:48:04.894026733Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=89.281µs grafana | logger=migrator t=2025-06-18T14:48:04.90079885Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-18T14:48:04.904414104Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.620934ms grafana | logger=migrator t=2025-06-18T14:48:04.908750482Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-18T14:48:04.911879975Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=3.137583ms grafana | logger=migrator t=2025-06-18T14:48:04.915754874Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-18T14:48:04.917789115Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.033951ms grafana | logger=migrator t=2025-06-18T14:48:04.920922519Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-18T14:48:04.922876698Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.953169ms grafana | logger=migrator t=2025-06-18T14:48:04.927366619Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-18T14:48:04.927593665Z level=info msg="Migration successfully executed" id="Update uid column 
values in dashboard" duration=224.576µs grafana | logger=migrator t=2025-06-18T14:48:04.931220808Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-18T14:48:04.931984043Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=763.126µs grafana | logger=migrator t=2025-06-18T14:48:04.936392203Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-18T14:48:04.937345822Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=951.359µs grafana | logger=migrator t=2025-06-18T14:48:04.942469726Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-18T14:48:04.942510116Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=41.311µs grafana | logger=migrator t=2025-06-18T14:48:04.946333644Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-18T14:48:04.947160312Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=826.008µs grafana | logger=migrator t=2025-06-18T14:48:04.950725093Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-18T14:48:04.951421588Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=697.915µs grafana | logger=migrator t=2025-06-18T14:48:04.955933279Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-18T14:48:04.965124275Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=9.191806ms grafana | logger=migrator t=2025-06-18T14:48:04.968653957Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-18T14:48:04.969534125Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=879.428µs grafana | logger=migrator t=2025-06-18T14:48:04.974878803Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-18T14:48:04.975838943Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=959.81µs grafana | logger=migrator t=2025-06-18T14:48:04.979547958Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-18T14:48:04.980521537Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=973.029µs grafana | logger=migrator t=2025-06-18T14:48:05.015489304Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-18T14:48:05.016142988Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=653.844µs grafana | logger=migrator t=2025-06-18T14:48:05.02226727Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-18T14:48:05.022833803Z level=info msg="Migration successfully executed" id="drop 
dashboard_provisioning_tmp_qwerty" duration=566.442µs grafana | logger=migrator t=2025-06-18T14:48:05.029808863Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-18T14:48:05.033228861Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.418748ms grafana | logger=migrator t=2025-06-18T14:48:05.036832954Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-18T14:48:05.037627969Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=794.535µs grafana | logger=migrator t=2025-06-18T14:48:05.042400986Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-18T14:48:05.042691981Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=290.165µs grafana | logger=migrator t=2025-06-18T14:48:05.04611784Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-18T14:48:05.046489687Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=364.378µs grafana | logger=migrator t=2025-06-18T14:48:05.049746243Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-18T14:48:05.050503918Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=757.445µs grafana | logger=migrator t=2025-06-18T14:48:05.055353485Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-18T14:48:05.057493068Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.139143ms grafana | logger=migrator t=2025-06-18T14:48:05.060740204Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2025-06-18T14:48:05.062911967Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.170943ms grafana | logger=migrator t=2025-06-18T14:48:05.066266135Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-18T14:48:05.066994699Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=727.894µs grafana | logger=migrator t=2025-06-18T14:48:05.071496389Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-18T14:48:05.073903368Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.405999ms grafana | logger=migrator t=2025-06-18T14:48:05.078296116Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-18T14:48:05.080571042Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.270796ms grafana | logger=migrator t=2025-06-18T14:48:05.084152703Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-18T14:48:05.084705815Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=552.282µs grafana | logger=migrator t=2025-06-18T14:48:05.088431419Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-18T14:48:05.090788727Z level=info 
msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.356408ms grafana | logger=migrator t=2025-06-18T14:48:05.096169025Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-18T14:48:05.097737917Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=1.568412ms grafana | logger=migrator t=2025-06-18T14:48:05.103215977Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-18T14:48:05.104197347Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=983.231µs grafana | logger=migrator t=2025-06-18T14:48:05.108156106Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2025-06-18T14:48:05.109714467Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.558061ms grafana | logger=migrator t=2025-06-18T14:48:05.113463192Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-18T14:48:05.115179147Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.714675ms grafana | logger=migrator t=2025-06-18T14:48:05.122034024Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-18T14:48:05.122856481Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=822.087µs grafana | logger=migrator t=2025-06-18T14:48:05.126249689Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-18T14:48:05.127008805Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=757.125µs grafana | logger=migrator t=2025-06-18T14:48:05.131167158Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-18T14:48:05.131883242Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=716.074µs grafana | logger=migrator t=2025-06-18T14:48:05.134972075Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-18T14:48:05.142870893Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.897918ms grafana | logger=migrator t=2025-06-18T14:48:05.146581348Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-18T14:48:05.147502917Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=922.629µs grafana | logger=migrator t=2025-06-18T14:48:05.151503886Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-18T14:48:05.152289942Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=785.876µs grafana | logger=migrator t=2025-06-18T14:48:05.155485266Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-18T14:48:05.156315843Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" 
duration=830.057µs grafana | logger=migrator t=2025-06-18T14:48:05.160934466Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-18T14:48:05.161433736Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=499.27µs grafana | logger=migrator t=2025-06-18T14:48:05.169775254Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-18T14:48:05.174011779Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.235625ms grafana | logger=migrator t=2025-06-18T14:48:05.177458698Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-18T14:48:05.180845025Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.385137ms grafana | logger=migrator t=2025-06-18T14:48:05.184881547Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-18T14:48:05.184905197Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=24.34µs grafana | logger=migrator t=2025-06-18T14:48:05.188244845Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2025-06-18T14:48:05.188428968Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=184.263µs grafana | logger=migrator t=2025-06-18T14:48:05.255665099Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-18T14:48:05.259996116Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.334027ms grafana | logger=migrator t=2025-06-18T14:48:05.264916274Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2025-06-18T14:48:05.265107519Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=191.465µs grafana | logger=migrator t=2025-06-18T14:48:05.269516867Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2025-06-18T14:48:05.269711881Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=194.764µs grafana | logger=migrator t=2025-06-18T14:48:05.27319235Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-18T14:48:05.275639289Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.446289ms grafana | logger=migrator t=2025-06-18T14:48:05.279770972Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-18T14:48:05.279954726Z level=info msg="Migration successfully executed" id="Update uid value" duration=180.544µs grafana | logger=migrator t=2025-06-18T14:48:05.284353174Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-18T14:48:05.285140731Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=786.927µs grafana | logger=migrator t=2025-06-18T14:48:05.289362605Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-18T14:48:05.290207623Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=841.237µs grafana | logger=migrator 
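The data_source entries just above (add a nullable uid column, backfill values, then add a unique index over org_id and uid) show the standard order for introducing a new identifier without breaking existing rows. A sketch of the same three steps against a made-up schema; the uid generator below is a stand-in, not Grafana's actual implementation:

```python
import secrets
import sqlite3
import string

ALPHABET = string.ascii_letters + string.digits

def new_uid(n=9):
    # Stand-in generator; the real Grafana uid format differs.
    return "".join(secrets.choice(ALPHABET) for _ in range(n))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data_source (id INTEGER PRIMARY KEY, org_id INTEGER, name TEXT)")
con.executemany("INSERT INTO data_source (org_id, name) VALUES (?, ?)",
                [(1, "prometheus"), (1, "loki")])

# Step 1: add the column nullable, so existing rows are untouched.
con.execute("ALTER TABLE data_source ADD COLUMN uid TEXT")

# Step 2: backfill every row that has no uid yet.
rows = con.execute("SELECT id FROM data_source WHERE uid IS NULL").fetchall()
for (row_id,) in rows:
    con.execute("UPDATE data_source SET uid = ? WHERE id = ?", (new_uid(), row_id))

# Step 3: only now is the unique index safe to create.
con.execute("CREATE UNIQUE INDEX UQE_data_source_org_id_uid ON data_source (org_id, uid)")
con.commit()
```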
t=2025-06-18T14:48:05.293270514Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-18T14:48:05.295881437Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.613032ms grafana | logger=migrator t=2025-06-18T14:48:05.298829286Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-18T14:48:05.301292224Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.462458ms grafana | logger=migrator t=2025-06-18T14:48:05.305441318Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-18T14:48:05.305459068Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=18.61µs grafana | logger=migrator t=2025-06-18T14:48:05.30998553Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-18T14:48:05.310704694Z level=info msg="Migration successfully executed" id="create api_key table" duration=718.804µs grafana | logger=migrator t=2025-06-18T14:48:05.314438709Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-18T14:48:05.315664474Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.229625ms grafana | logger=migrator t=2025-06-18T14:48:05.32047471Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-18T14:48:05.321221045Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=745.865µs grafana | logger=migrator t=2025-06-18T14:48:05.324507591Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-18T14:48:05.325279507Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=771.736µs grafana | logger=migrator t=2025-06-18T14:48:05.328394339Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-18T14:48:05.329125683Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=730.814µs grafana | logger=migrator t=2025-06-18T14:48:05.334372939Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-18T14:48:05.335180236Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=807.047µs grafana | logger=migrator t=2025-06-18T14:48:05.338426291Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-18T14:48:05.340034693Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.600402ms grafana | logger=migrator t=2025-06-18T14:48:05.343559484Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-18T14:48:05.353753349Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.194484ms grafana | logger=migrator t=2025-06-18T14:48:05.360920502Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-18T14:48:05.361641927Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=721.145µs grafana | logger=migrator 
t=2025-06-18T14:48:05.392332553Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-18T14:48:05.393971137Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.641753ms grafana | logger=migrator t=2025-06-18T14:48:05.399714422Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-18T14:48:05.400520967Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=807.415µs grafana | logger=migrator t=2025-06-18T14:48:05.40409655Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-18T14:48:05.405748392Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.651232ms grafana | logger=migrator t=2025-06-18T14:48:05.411481688Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-18T14:48:05.411895796Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=414.558µs grafana | logger=migrator t=2025-06-18T14:48:05.414816135Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-18T14:48:05.415453128Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=632.302µs grafana | logger=migrator t=2025-06-18T14:48:05.418706623Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-18T14:48:05.418731343Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=25.38µs grafana | logger=migrator t=2025-06-18T14:48:05.424752435Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-18T14:48:05.42751411Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.761045ms grafana | logger=migrator t=2025-06-18T14:48:05.432281296Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2025-06-18T14:48:05.434882698Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.600532ms grafana | logger=migrator t=2025-06-18T14:48:05.43797668Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-18T14:48:05.438174454Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=196.904µs grafana | logger=migrator t=2025-06-18T14:48:05.441728206Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-18T14:48:05.444349258Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.620312ms grafana | logger=migrator t=2025-06-18T14:48:05.450808078Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-18T14:48:05.453885729Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.076472ms grafana | logger=migrator t=2025-06-18T14:48:05.457492472Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-18T14:48:05.458261978Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table 
v4" duration=768.736µs grafana | logger=migrator t=2025-06-18T14:48:05.461550493Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-18T14:48:05.462178436Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=627.313µs grafana | logger=migrator t=2025-06-18T14:48:05.46737987Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-18T14:48:05.468415031Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.033591ms grafana | logger=migrator t=2025-06-18T14:48:05.47134761Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-18T14:48:05.47230059Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=952.36µs grafana | logger=migrator t=2025-06-18T14:48:05.475368401Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-18T14:48:05.476495694Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.126493ms grafana | logger=migrator t=2025-06-18T14:48:05.483249459Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-18T14:48:05.484020985Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=770.796µs grafana | logger=migrator t=2025-06-18T14:48:05.486648078Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-18T14:48:05.486666528Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=18.69µs grafana | logger=migrator t=2025-06-18T14:48:05.495126588Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-18T14:48:05.495356182Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=233.934µs grafana | logger=migrator t=2025-06-18T14:48:05.498114378Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-18T14:48:05.500427955Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.312587ms grafana | logger=migrator t=2025-06-18T14:48:05.507032087Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-18T14:48:05.509086288Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.053691ms grafana | logger=migrator t=2025-06-18T14:48:05.51267628Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-18T14:48:05.51269319Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=17.41µs grafana | logger=migrator t=2025-06-18T14:48:05.515717422Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-18T14:48:05.516425776Z level=info msg="Migration successfully executed" id="create quota table v1" duration=707.883µs grafana | logger=migrator 
t=2025-06-18T14:48:05.525799074Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-18T14:48:05.526490697Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=691.403µs grafana | logger=migrator t=2025-06-18T14:48:05.529270063Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-18T14:48:05.529290704Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=21.151µs grafana | logger=migrator t=2025-06-18T14:48:05.532220862Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-18T14:48:05.532958678Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=737.346µs grafana | logger=migrator t=2025-06-18T14:48:05.540274174Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-18T14:48:05.541767864Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.49319ms grafana | logger=migrator t=2025-06-18T14:48:05.548303866Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-18T14:48:05.554807207Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=6.499451ms grafana | logger=migrator t=2025-06-18T14:48:05.562493521Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-18T14:48:05.562541122Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=50.321µs grafana | logger=migrator t=2025-06-18T14:48:05.56893565Z level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-18T14:48:05.569298687Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=362.537µs grafana | logger=migrator t=2025-06-18T14:48:05.572263207Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-18T14:48:05.583210577Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=10.94621ms grafana | logger=migrator t=2025-06-18T14:48:05.587575715Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-18T14:48:05.588153676Z level=info msg="Migration successfully executed" id="create session table" duration=580.122µs grafana | logger=migrator t=2025-06-18T14:48:05.592644427Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-18T14:48:05.59281291Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=167.673µs grafana | logger=migrator t=2025-06-18T14:48:05.597738519Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-18T14:48:05.597947763Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=213.385µs grafana | logger=migrator t=2025-06-18T14:48:05.601713188Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-18T14:48:05.602714619Z level=info msg="Migration successfully executed" 
id="create playlist table v2" duration=1.000901ms grafana | logger=migrator t=2025-06-18T14:48:05.606353021Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-18T14:48:05.607165658Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=811.627µs grafana | logger=migrator t=2025-06-18T14:48:05.610488054Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-18T14:48:05.610516265Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=28.931µs grafana | logger=migrator t=2025-06-18T14:48:05.614678129Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-18T14:48:05.614714969Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=37.5µs grafana | logger=migrator t=2025-06-18T14:48:05.62022596Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-18T14:48:05.624619018Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.396458ms grafana | logger=migrator t=2025-06-18T14:48:05.628032917Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-18T14:48:05.632206941Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.165594ms grafana | logger=migrator t=2025-06-18T14:48:05.637378365Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-18T14:48:05.637576209Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=206.404µs grafana | logger=migrator t=2025-06-18T14:48:05.643742663Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-18T14:48:05.643819794Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=78.611µs grafana | logger=migrator t=2025-06-18T14:48:05.647856475Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2025-06-18T14:48:05.648844015Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=987.19µs grafana | logger=migrator t=2025-06-18T14:48:05.6530637Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-18T14:48:05.653088921Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=25.901µs grafana | logger=migrator t=2025-06-18T14:48:05.658558221Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-18T14:48:05.661901637Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.339897ms grafana | logger=migrator t=2025-06-18T14:48:05.664995429Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-18T14:48:05.665142272Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=148.913µs grafana | logger=migrator t=2025-06-18T14:48:05.668658163Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2025-06-18T14:48:05.673918589Z level=info msg="Migration successfully executed" 
id="Add column week_start in preferences" duration=5.256496ms grafana | logger=migrator t=2025-06-18T14:48:05.677419879Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-18T14:48:05.680871119Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.45072ms grafana | logger=migrator t=2025-06-18T14:48:05.685217846Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-18T14:48:05.685234716Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=16.29µs grafana | logger=migrator t=2025-06-18T14:48:05.688594773Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-18T14:48:05.689298148Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=702.985µs grafana | logger=migrator t=2025-06-18T14:48:05.692845018Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-18T14:48:05.69440413Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.558372ms grafana | logger=migrator t=2025-06-18T14:48:05.700315768Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-18T14:48:05.702341669Z level=info msg="Migration successfully executed" id="create alert table v1" duration=2.028041ms grafana | logger=migrator t=2025-06-18T14:48:05.706091385Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-18T14:48:05.70782752Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.736885ms grafana | logger=migrator t=2025-06-18T14:48:05.712728928Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-18T14:48:05.713396532Z level=info msg="Migration successfully executed" id="add index alert state" duration=667.604µs grafana | logger=migrator t=2025-06-18T14:48:05.716360271Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-18T14:48:05.717779659Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.419148ms grafana | logger=migrator t=2025-06-18T14:48:05.722392602Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-18T14:48:05.723125837Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=733.785µs grafana | logger=migrator t=2025-06-18T14:48:05.727899113Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-18T14:48:05.729510155Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.609832ms grafana | logger=migrator t=2025-06-18T14:48:05.733045386Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-18T14:48:05.734268261Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.222045ms grafana | logger=migrator t=2025-06-18T14:48:05.738882343Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator 
t=2025-06-18T14:48:05.748720281Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.837288ms grafana | logger=migrator t=2025-06-18T14:48:05.830496833Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-18T14:48:05.831883501Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.386398ms grafana | logger=migrator t=2025-06-18T14:48:05.83678373Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-18T14:48:05.838158687Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.374187ms grafana | logger=migrator t=2025-06-18T14:48:05.845589696Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-18T14:48:05.845879762Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=287.266µs grafana | logger=migrator t=2025-06-18T14:48:05.849400353Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-18T14:48:05.850179198Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=778.275µs grafana | logger=migrator t=2025-06-18T14:48:05.854650169Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-18T14:48:05.856290041Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.643972ms grafana | logger=migrator t=2025-06-18T14:48:05.861799802Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-18T14:48:05.866235221Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.434959ms grafana | logger=migrator t=2025-06-18T14:48:05.86969072Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2025-06-18T14:48:05.873491236Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.799516ms grafana | logger=migrator t=2025-06-18T14:48:05.876836484Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-18T14:48:05.880569689Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.731895ms grafana | logger=migrator t=2025-06-18T14:48:05.887904896Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-18T14:48:05.891978968Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.073242ms grafana | logger=migrator t=2025-06-18T14:48:05.895090741Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-18T14:48:05.896195202Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.103761ms grafana | logger=migrator t=2025-06-18T14:48:05.899380287Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-18T14:48:05.899415058Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=30.931µs grafana | logger=migrator 
t=2025-06-18T14:48:05.902135423Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2025-06-18T14:48:05.902164303Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=28.9µs
grafana | logger=migrator t=2025-06-18T14:48:05.907920869Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2025-06-18T14:48:05.908912398Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=990.689µs
grafana | logger=migrator t=2025-06-18T14:48:05.913215985Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-18T14:48:05.915120023Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.903378ms
grafana | logger=migrator t=2025-06-18T14:48:05.919131374Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2025-06-18T14:48:05.920361518Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.201144ms
grafana | logger=migrator t=2025-06-18T14:48:05.926032272Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2025-06-18T14:48:05.926890999Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=858.157µs
grafana | logger=migrator t=2025-06-18T14:48:05.956579605Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2025-06-18T14:48:05.958221649Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.641074ms
grafana | logger=migrator t=2025-06-18T14:48:05.966659088Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2025-06-18T14:48:05.971035046Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.375928ms
grafana | logger=migrator t=2025-06-18T14:48:05.974498175Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2025-06-18T14:48:05.978522096Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.022901ms
grafana | logger=migrator t=2025-06-18T14:48:05.981559737Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2025-06-18T14:48:05.981851703Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=295.646µs
grafana | logger=migrator t=2025-06-18T14:48:05.985212841Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2025-06-18T14:48:05.986241592Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.02807ms
grafana | logger=migrator t=2025-06-18T14:48:05.990553278Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2025-06-18T14:48:05.992230752Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.675683ms
grafana | logger=migrator t=2025-06-18T14:48:05.995828124Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2025-06-18T14:48:06.001363264Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.53529ms
grafana | logger=migrator t=2025-06-18T14:48:06.008074028Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2025-06-18T14:48:06.008110209Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=45.011µs
grafana | logger=migrator t=2025-06-18T14:48:06.011846134Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2025-06-18T14:48:06.012926585Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.080241ms
grafana | logger=migrator t=2025-06-18T14:48:06.017615578Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2025-06-18T14:48:06.018363383Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=744.375µs
grafana | logger=migrator t=2025-06-18T14:48:06.024021376Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2025-06-18T14:48:06.02421717Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=192.494µs
grafana | logger=migrator t=2025-06-18T14:48:06.027321801Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2025-06-18T14:48:06.028451724Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.076992ms
grafana | logger=migrator t=2025-06-18T14:48:06.034669537Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2025-06-18T14:48:06.03583424Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.160513ms
grafana | logger=migrator t=2025-06-18T14:48:06.039022983Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2025-06-18T14:48:06.039947742Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=921.679µs
grafana | logger=migrator t=2025-06-18T14:48:06.043242837Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2025-06-18T14:48:06.044174016Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=930.869µs
grafana | logger=migrator t=2025-06-18T14:48:06.049530293Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2025-06-18T14:48:06.050523243Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=992.37µs
grafana | logger=migrator t=2025-06-18T14:48:06.053962681Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2025-06-18T14:48:06.054936871Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=973.61µs
grafana | logger=migrator t=2025-06-18T14:48:06.063522001Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2025-06-18T14:48:06.063580652Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=62.251µs
grafana | logger=migrator t=2025-06-18T14:48:06.067357328Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.071657473Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.299535ms
grafana | logger=migrator t=2025-06-18T14:48:06.076157332Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2025-06-18T14:48:06.076988238Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=830.516µs
grafana | logger=migrator t=2025-06-18T14:48:06.08156121Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.085698532Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.137002ms
grafana | logger=migrator t=2025-06-18T14:48:06.089152331Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2025-06-18T14:48:06.089837435Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=684.715µs
grafana | logger=migrator t=2025-06-18T14:48:06.094005638Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2025-06-18T14:48:06.094901075Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=891.877µs
grafana | logger=migrator t=2025-06-18T14:48:06.098134579Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2025-06-18T14:48:06.098943055Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=808.096µs
grafana | logger=migrator t=2025-06-18T14:48:06.102210751Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2025-06-18T14:48:06.113982585Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.771124ms
grafana | logger=migrator t=2025-06-18T14:48:06.118141117Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2025-06-18T14:48:06.118652207Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=510.89µs
grafana | logger=migrator t=2025-06-18T14:48:06.121704647Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2025-06-18T14:48:06.122352341Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=646.744µs
grafana | logger=migrator t=2025-06-18T14:48:06.127436942Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2025-06-18T14:48:06.127722438Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=285.456µs
grafana | logger=migrator t=2025-06-18T14:48:06.133101935Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2025-06-18T14:48:06.133993602Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=891.547µs
grafana | logger=migrator t=2025-06-18T14:48:06.138861779Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2025-06-18T14:48:06.139208076Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=352.687µs
grafana | logger=migrator t=2025-06-18T14:48:06.143844078Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.148122453Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.280915ms
grafana | logger=migrator t=2025-06-18T14:48:06.153452039Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.15753457Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.081731ms
grafana | logger=migrator t=2025-06-18T14:48:06.160352826Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.161378197Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.024811ms
grafana | logger=migrator t=2025-06-18T14:48:06.198326231Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.200213048Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.888917ms
grafana | logger=migrator t=2025-06-18T14:48:06.207959923Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2025-06-18T14:48:06.208565974Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=605.231µs
grafana | logger=migrator t=2025-06-18T14:48:06.212758088Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2025-06-18T14:48:06.218288398Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=5.533921ms
grafana | logger=migrator t=2025-06-18T14:48:06.22287847Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2025-06-18T14:48:06.223599514Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=721.364µs
grafana | logger=migrator t=2025-06-18T14:48:06.226776677Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
grafana | logger=migrator t=2025-06-18T14:48:06.22692932Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=154.754µs
grafana | logger=migrator t=2025-06-18T14:48:06.231607323Z level=info msg="Executing migration" id="Move region to single row"
grafana | logger=migrator t=2025-06-18T14:48:06.232219275Z level=info msg="Migration successfully executed" id="Move region to single row" duration=611.172µs
grafana | logger=migrator t=2025-06-18T14:48:06.237119873Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.238434319Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.314327ms
grafana | logger=migrator t=2025-06-18T14:48:06.243130062Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.244483729Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.352827ms
grafana | logger=migrator t=2025-06-18T14:48:06.247892787Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.248860196Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=966.899µs
grafana | logger=migrator t=2025-06-18T14:48:06.253435417Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.254414566Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=978.799µs
grafana | logger=migrator t=2025-06-18T14:48:06.258690181Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.259550949Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=860.768µs
grafana | logger=migrator t=2025-06-18T14:48:06.263459067Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2025-06-18T14:48:06.264836914Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.376917ms
grafana | logger=migrator t=2025-06-18T14:48:06.270592228Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2025-06-18T14:48:06.27066624Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=79.542µs
grafana | logger=migrator t=2025-06-18T14:48:06.273951334Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null"
grafana | logger=migrator t=2025-06-18T14:48:06.273978385Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=28.101µs
grafana | logger=migrator t=2025-06-18T14:48:06.279059207Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null"
grafana | logger=migrator t=2025-06-18T14:48:06.279099147Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=44.331µs
grafana | logger=migrator t=2025-06-18T14:48:06.283898172Z level=info msg="Executing migration" id="create test_data table"
grafana | logger=migrator t=2025-06-18T14:48:06.284942573Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.044121ms
grafana | logger=migrator t=2025-06-18T14:48:06.289410712Z level=info msg="Executing migration" id="create dashboard_version table v1"
grafana | logger=migrator t=2025-06-18T14:48:06.291674657Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=2.257345ms
grafana | logger=migrator t=2025-06-18T14:48:06.298138495Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
grafana | logger=migrator t=2025-06-18T14:48:06.299044014Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=906.109µs
grafana | logger=migrator t=2025-06-18T14:48:06.302634696Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | logger=migrator t=2025-06-18T14:48:06.303516202Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=880.667µs
grafana | logger=migrator t=2025-06-18T14:48:06.31499679Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2025-06-18T14:48:06.315405309Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=410.249µs
grafana | logger=migrator t=2025-06-18T14:48:06.318263306Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2025-06-18T14:48:06.318767386Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=503.43µs
grafana | logger=migrator t=2025-06-18T14:48:06.321243705Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
grafana | logger=migrator t=2025-06-18T14:48:06.321333537Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=90.562µs
grafana | logger=migrator t=2025-06-18T14:48:06.326645803Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version"
grafana | logger=migrator t=2025-06-18T14:48:06.33205151Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=5.400327ms
grafana | logger=migrator t=2025-06-18T14:48:06.336169442Z level=info msg="Executing migration" id="create team table"
grafana | logger=migrator t=2025-06-18T14:48:06.337140191Z level=info msg="Migration successfully executed" id="create team table" duration=970.359µs
grafana | logger=migrator t=2025-06-18T14:48:06.342197042Z level=info msg="Executing migration" id="add index team.org_id"
grafana | logger=migrator t=2025-06-18T14:48:06.343352475Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.155103ms
grafana | logger=migrator t=2025-06-18T14:48:06.350588469Z level=info msg="Executing migration" id="add unique index team_org_id_name"
grafana | logger=migrator t=2025-06-18T14:48:06.351696331Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.107522ms
grafana | logger=migrator t=2025-06-18T14:48:06.356291852Z level=info msg="Executing migration" id="Add column uid in team"
grafana | logger=migrator t=2025-06-18T14:48:06.361164568Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.871486ms
grafana | logger=migrator t=2025-06-18T14:48:06.364587787Z level=info msg="Executing migration" id="Update uid column values in team"
grafana | logger=migrator t=2025-06-18T14:48:06.364914683Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=326.126µs
grafana | logger=migrator t=2025-06-18T14:48:06.369017155Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
grafana | logger=migrator t=2025-06-18T14:48:06.370101997Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.084282ms
grafana | logger=migrator t=2025-06-18T14:48:06.40647438Z level=info msg="Executing migration" id="Add column external_uid in team"
grafana | logger=migrator t=2025-06-18T14:48:06.412885477Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=6.412397ms
grafana | logger=migrator t=2025-06-18T14:48:06.416507379Z level=info msg="Executing migration" id="Add column is_provisioned in team"
grafana | logger=migrator t=2025-06-18T14:48:06.421284074Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.775665ms
grafana | logger=migrator t=2025-06-18T14:48:06.424696012Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2025-06-18T14:48:06.42563379Z level=info msg="Migration successfully executed" id="create team member table" duration=937.038µs
grafana | logger=migrator t=2025-06-18T14:48:06.429892916Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2025-06-18T14:48:06.431019747Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.126351ms
grafana | logger=migrator t=2025-06-18T14:48:06.434568548Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2025-06-18T14:48:06.436001997Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.432449ms
grafana | logger=migrator t=2025-06-18T14:48:06.441495566Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2025-06-18T14:48:06.442720731Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.224305ms
grafana | logger=migrator t=2025-06-18T14:48:06.447289321Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2025-06-18T14:48:06.455295261Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.00455ms
grafana | logger=migrator t=2025-06-18T14:48:06.463097385Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2025-06-18T14:48:06.466996133Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.900668ms
grafana | logger=migrator t=2025-06-18T14:48:06.46989591Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2025-06-18T14:48:06.474843179Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.946609ms
grafana | logger=migrator t=2025-06-18T14:48:06.478224496Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id"
grafana | logger=migrator t=2025-06-18T14:48:06.479221616Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=993.34µs
grafana | logger=migrator t=2025-06-18T14:48:06.484366048Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2025-06-18T14:48:06.485185265Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=815.107µs
grafana | logger=migrator t=2025-06-18T14:48:06.488601862Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
grafana | logger=migrator t=2025-06-18T14:48:06.490121063Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.518921ms
grafana | logger=migrator t=2025-06-18T14:48:06.493749895Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | logger=migrator t=2025-06-18T14:48:06.494730485Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=980.179µs
grafana | logger=migrator t=2025-06-18T14:48:06.50003537Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | logger=migrator t=2025-06-18T14:48:06.500917087Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=880.947µs
grafana | logger=migrator t=2025-06-18T14:48:06.505577251Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2025-06-18T14:48:06.506532489Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=945.278µs
grafana | logger=migrator t=2025-06-18T14:48:06.50963239Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2025-06-18T14:48:06.511162401Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.528311ms
grafana | logger=migrator t=2025-06-18T14:48:06.515651371Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2025-06-18T14:48:06.517177371Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.525141ms
grafana | logger=migrator t=2025-06-18T14:48:06.524220701Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2025-06-18T14:48:06.526161849Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.943028ms
grafana | logger=migrator t=2025-06-18T14:48:06.529854003Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2025-06-18T14:48:06.53068216Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=827.366µs
grafana | logger=migrator t=2025-06-18T14:48:06.53576336Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
grafana | logger=migrator t=2025-06-18T14:48:06.536101067Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=335.956µs
grafana | logger=migrator t=2025-06-18T14:48:06.540812111Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2025-06-18T14:48:06.541566885Z level=info msg="Migration successfully executed" id="create tag table" duration=754.234µs
grafana | logger=migrator t=2025-06-18T14:48:06.545696048Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2025-06-18T14:48:06.547273669Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.577301ms
grafana | logger=migrator t=2025-06-18T14:48:06.579608952Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2025-06-18T14:48:06.580867127Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.254715ms
grafana | logger=migrator t=2025-06-18T14:48:06.584756684Z level=info msg="Executing migration" id="add index login_attempt.username"
grafana | logger=migrator t=2025-06-18T14:48:06.585673813Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=898.359µs
grafana | logger=migrator t=2025-06-18T14:48:06.590178302Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2025-06-18T14:48:06.591009709Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=834.177µs
grafana | logger=migrator t=2025-06-18T14:48:06.59410708Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-18T14:48:06.606281303Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=12.165463ms
grafana | logger=migrator t=2025-06-18T14:48:06.611919405Z level=info msg="Executing migration" id="create login_attempt v2"
grafana | logger=migrator t=2025-06-18T14:48:06.612928484Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.009179ms
grafana | logger=migrator t=2025-06-18T14:48:06.61771675Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
grafana | logger=migrator t=2025-06-18T14:48:06.618697669Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=980.109µs
grafana | logger=migrator t=2025-06-18T14:48:06.62174206Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
grafana | logger=migrator t=2025-06-18T14:48:06.622037676Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=295.326µs
grafana | logger=migrator t=2025-06-18T14:48:06.625098867Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
grafana | logger=migrator t=2025-06-18T14:48:06.626134328Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.03498ms
grafana | logger=migrator t=2025-06-18T14:48:06.629449203Z level=info msg="Executing migration" id="create user auth table"
grafana | logger=migrator t=2025-06-18T14:48:06.630779169Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.330156ms
grafana | logger=migrator t=2025-06-18T14:48:06.667675642Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | logger=migrator t=2025-06-18T14:48:06.669482229Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.806787ms
grafana | logger=migrator t=2025-06-18T14:48:06.673317415Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
grafana | logger=migrator t=2025-06-18T14:48:06.673340555Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=22.37µs
grafana | logger=migrator t=2025-06-18T14:48:06.678021359Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
grafana | logger=migrator t=2025-06-18T14:48:06.683494558Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.472489ms
grafana | logger=migrator t=2025-06-18T14:48:06.689054508Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
grafana | logger=migrator t=2025-06-18T14:48:06.694549058Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.4937ms
grafana | logger=migrator t=2025-06-18T14:48:06.698392483Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
grafana | logger=migrator t=2025-06-18T14:48:06.703973764Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.579791ms
grafana | logger=migrator t=2025-06-18T14:48:06.708508105Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
grafana | logger=migrator t=2025-06-18T14:48:06.713965353Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.456498ms
grafana | logger=migrator t=2025-06-18T14:48:06.720729018Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
grafana | logger=migrator t=2025-06-18T14:48:06.721880411Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.153933ms
grafana | logger=migrator t=2025-06-18T14:48:06.725115304Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
grafana | logger=migrator t=2025-06-18T14:48:06.73141804Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.301506ms
grafana | logger=migrator t=2025-06-18T14:48:06.734770557Z level=info msg="Executing migration" id="Add user_unique_id to user_auth"
grafana | logger=migrator t=2025-06-18T14:48:06.738727486Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=3.957029ms
grafana | logger=migrator t=2025-06-18T14:48:06.742723375Z level=info msg="Executing migration" id="create server_lock table"
grafana | logger=migrator t=2025-06-18T14:48:06.743513241Z level=info msg="Migration successfully executed" id="create server_lock table" duration=789.316µs
grafana | logger=migrator t=2025-06-18T14:48:06.751254275Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
grafana | logger=migrator t=2025-06-18T14:48:06.753106171Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.851846ms
grafana | logger=migrator t=2025-06-18T14:48:06.757246834Z level=info msg="Executing migration" id="create user auth token table"
grafana | logger=migrator t=2025-06-18T14:48:06.75902041Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.773426ms
grafana | logger=migrator t=2025-06-18T14:48:06.764418916Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
grafana | logger=migrator t=2025-06-18T14:48:06.765487667Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.067691ms
grafana | logger=migrator t=2025-06-18T14:48:06.770395365Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
grafana | logger=migrator t=2025-06-18T14:48:06.77215335Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.756885ms
grafana | logger=migrator t=2025-06-18T14:48:06.775912655Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
grafana | logger=migrator t=2025-06-18T14:48:06.777562898Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.649623ms
grafana | logger=migrator t=2025-06-18T14:48:06.783598998Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
grafana | logger=migrator t=2025-06-18T14:48:06.793182418Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=9.58205ms
grafana | logger=migrator t=2025-06-18T14:48:06.797032755Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
grafana | logger=migrator t=2025-06-18T14:48:06.798518795Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.48482ms
grafana | logger=migrator t=2025-06-18T14:48:06.803012074Z level=info msg="Executing migration" id="add external_session_id to user_auth_token"
grafana | logger=migrator t=2025-06-18T14:48:06.810492812Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=7.480808ms
grafana | logger=migrator t=2025-06-18T14:48:06.817491141Z level=info msg="Executing migration" id="create cache_data table"
grafana | logger=migrator t=2025-06-18T14:48:06.819298058Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.805347ms
grafana | logger=migrator t=2025-06-18T14:48:06.823168164Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
grafana | logger=migrator t=2025-06-18T14:48:06.824263366Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.094812ms
grafana | logger=migrator t=2025-06-18T14:48:06.82894655Z level=info msg="Executing migration" id="create short_url table v1"
grafana | logger=migrator t=2025-06-18T14:48:06.829822267Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=874.877µs
grafana | logger=migrator t=2025-06-18T14:48:06.833526261Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
grafana | logger=migrator t=2025-06-18T14:48:06.834626093Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.099762ms
grafana | logger=migrator t=2025-06-18T14:48:06.838319216Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
grafana | logger=migrator t=2025-06-18T14:48:06.838344726Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=25.97µs
grafana | logger=migrator t=2025-06-18T14:48:06.849822054Z level=info msg="Executing migration" id="delete alert_definition table"
grafana | logger=migrator t=2025-06-18T14:48:06.849926957Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=100.333µs
grafana | logger=migrator t=2025-06-18T14:48:06.854938066Z level=info msg="Executing migration" id="recreate alert_definition table"
grafana | logger=migrator t=2025-06-18T14:48:06.855906236Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=966.85µs
grafana | logger=migrator t=2025-06-18T14:48:06.860241702Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2025-06-18T14:48:06.861198961Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=956.189µs
grafana | logger=migrator t=2025-06-18T14:48:06.864579418Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2025-06-18T14:48:06.865597878Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.01728ms
grafana | logger=migrator t=2025-06-18T14:48:06.871030627Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-18T14:48:06.871067367Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=37.851µs
grafana | logger=migrator t=2025-06-18T14:48:06.874708729Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2025-06-18T14:48:06.876139838Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.431079ms
grafana | logger=migrator t=2025-06-18T14:48:06.880662247Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2025-06-18T14:48:06.882190319Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.527982ms
grafana | logger=migrator t=2025-06-18T14:48:06.887087245Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2025-06-18T14:48:06.888053925Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=966.58µs
grafana | logger=migrator t=2025-06-18T14:48:06.891473082Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2025-06-18T14:48:06.892888841Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.414408ms
grafana | logger=migrator t=2025-06-18T14:48:06.89785569Z level=info msg="Executing migration" id="Add column paused in alert_definition"
grafana | logger=migrator t=2025-06-18T14:48:06.905555442Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=7.700172ms
grafana | logger=migrator t=2025-06-18T14:48:06.910131284Z level=info msg="Executing migration" id="drop alert_definition table"
grafana | logger=migrator t=2025-06-18T14:48:06.911109793Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=978.559µs
grafana | logger=migrator t=2025-06-18T14:48:06.915341807Z level=info msg="Executing migration" id="delete alert_definition_version table"
grafana | logger=migrator t=2025-06-18T14:48:06.915424639Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=83.032µs
grafana | logger=migrator t=2025-06-18T14:48:06.919891168Z level=info msg="Executing migration" id="recreate alert_definition_version table"
grafana | logger=migrator t=2025-06-18T14:48:06.920800615Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=908.927µs
grafana | logger=migrator t=2025-06-18T14:48:06.924378537Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
grafana | logger=migrator t=2025-06-18T14:48:06.925445177Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.06617ms
grafana | logger=migrator t=2025-06-18T14:48:06.929097541Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
grafana | logger=migrator t=2025-06-18T14:48:06.930262624Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.163573ms
grafana | logger=migrator t=2025-06-18T14:48:06.971006694Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-18T14:48:06.971041165Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=37.441µs
grafana | logger=migrator t=2025-06-18T14:48:06.978259288Z level=info msg="Executing migration" id="drop alert_definition_version table"
grafana | logger=migrator t=2025-06-18T14:48:06.979416321Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.160553ms
grafana | logger=migrator t=2025-06-18T14:48:06.982952662Z level=info msg="Executing migration" id="create alert_instance table"
grafana | logger=migrator t=2025-06-18T14:48:06.983974201Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.020919ms
grafana | logger=migrator t=2025-06-18T14:48:06.988365949Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
grafana | logger=migrator t=2025-06-18T14:48:06.989968301Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.602062ms
grafana | logger=migrator t=2025-06-18T14:48:06.993218386Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
grafana | logger=migrator t=2025-06-18T14:48:06.994299477Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.077091ms
grafana | logger=migrator t=2025-06-18T14:48:06.99796469Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
grafana | logger=migrator t=2025-06-18T14:48:07.003905588Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.939458ms
grafana | logger=migrator t=2025-06-18T14:48:07.008267404Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
grafana | logger=migrator t=2025-06-18T14:48:07.009332735Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.064941ms
grafana | logger=migrator t=2025-06-18T14:48:07.012909345Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
grafana | logger=migrator t=2025-06-18T14:48:07.013875685Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=965.35µs
grafana | logger=migrator t=2025-06-18T14:48:07.020842122Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
grafana | logger=migrator t=2025-06-18T14:48:07.043790874Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=22.948512ms
grafana | logger=migrator t=2025-06-18T14:48:07.048383574Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
grafana | logger=migrator t=2025-06-18T14:48:07.079087548Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=30.701364ms
grafana | logger=migrator t=2025-06-18T14:48:07.086166857Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
grafana | logger=migrator t=2025-06-18T14:48:07.086948743Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=780.936µs
grafana | logger=migrator t=2025-06-18T14:48:07.091515923Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
grafana | logger=migrator t=2025-06-18T14:48:07.092476862Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=957.079µs
grafana | logger=migrator t=2025-06-18T14:48:07.097269486Z level=info msg="Executing migration" id="add current_reason column related to current_state"
grafana | logger=migrator t=2025-06-18T14:48:07.104624121Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.347885ms
grafana | logger=migrator t=2025-06-18T14:48:07.110499467Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
grafana | logger=migrator t=2025-06-18T14:48:07.116722769Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.222872ms
grafana | logger=migrator t=2025-06-18T14:48:07.121872651Z level=info msg="Executing migration" id="create alert_rule table"
grafana | logger=migrator t=2025-06-18T14:48:07.123004054Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.130722ms
grafana | logger=migrator t=2025-06-18T14:48:07.126570103Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
grafana | logger=migrator t=2025-06-18T14:48:07.127665065Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.094732ms
grafana | logger=migrator t=2025-06-18T14:48:07.132555521Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
grafana | logger=migrator t=2025-06-18T14:48:07.133601041Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.04236ms
grafana | logger=migrator t=2025-06-18T14:48:07.138152551Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
grafana | logger=migrator t=2025-06-18T14:48:07.139171891Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.01847ms
grafana | logger=migrator t=2025-06-18T14:48:07.14264576Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-18T14:48:07.1426643Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=18.94µs
grafana | logger=migrator t=2025-06-18T14:48:07.146611367Z level=info msg="Executing migration" id="add column for to alert_rule"
grafana | logger=migrator t=2025-06-18T14:48:07.155821259Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.210972ms
grafana | logger=migrator t=2025-06-18T14:48:07.160181885Z level=info msg="Executing migration" id="add column annotations to alert_rule"
grafana | logger=migrator t=2025-06-18T14:48:07.16453024Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.347715ms
grafana | logger=migrator t=2025-06-18T14:48:07.168320325Z level=info msg="Executing migration" id="add column labels to alert_rule"
grafana | logger=migrator t=2025-06-18T14:48:07.173309952Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.990867ms
grafana | logger=migrator t=2025-06-18T14:48:07.17825331Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
grafana | logger=migrator t=2025-06-18T14:48:07.179024156Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=770.066µs
grafana | logger=migrator t=2025-06-18T14:48:07.185934122Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
grafana | logger=migrator t=2025-06-18T14:48:07.187934651Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=2.002489ms
grafana | logger=migrator t=2025-06-18T14:48:07.193510081Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
grafana | logger=migrator t=2025-06-18T14:48:07.200560629Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.049418ms
grafana | logger=migrator t=2025-06-18T14:48:07.205251601Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
grafana | logger=migrator t=2025-06-18T14:48:07.212986154Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.737113ms
grafana | logger=migrator t=2025-06-18T14:48:07.217110926Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
grafana | logger=migrator t=2025-06-18T14:48:07.217998762Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=887.576µs
grafana | logger=migrator t=2025-06-18T14:48:07.224922719Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
grafana | logger=migrator t=2025-06-18T14:48:07.233635471Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=8.711282ms
grafana | logger=migrator t=2025-06-18T14:48:07.239991906Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
grafana | logger=migrator t=2025-06-18T14:48:07.247004634Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=7.011178ms
grafana | logger=migrator t=2025-06-18T14:48:07.250138035Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
grafana | logger=migrator t=2025-06-18T14:48:07.250156865Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=19.57µs
grafana | logger=migrator t=2025-06-18T14:48:07.254599944Z level=info msg="Executing migration" id="create alert_rule_version table"
grafana | logger=migrator t=2025-06-18T14:48:07.255497491Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=896.817µs
grafana | logger=migrator t=2025-06-18T14:48:07.258206484Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
grafana | logger=migrator t=2025-06-18T14:48:07.259325976Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.118812ms
grafana | logger=migrator t=2025-06-18T14:48:07.263363366Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
grafana | logger=migrator t=2025-06-18T14:48:07.264949337Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.585291ms
grafana | logger=migrator t=2025-06-18T14:48:07.270578068Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-18T14:48:07.270615858Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=22.02µs
grafana | logger=migrator t=2025-06-18T14:48:07.273461575Z level=info msg="Executing migration" id="add column for to alert_rule_version"
grafana | logger=migrator t=2025-06-18T14:48:07.280504803Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.042298ms
grafana | logger=migrator t=2025-06-18T14:48:07.283622904Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
grafana | logger=migrator t=2025-06-18T14:48:07.28844099Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.817766ms
grafana | logger=migrator t=2025-06-18T14:48:07.293631131Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
grafana | logger=migrator t=2025-06-18T14:48:07.304665139Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=11.026147ms
grafana | logger=migrator t=2025-06-18T14:48:07.310410731Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
grafana | logger=migrator t=2025-06-18T14:48:07.317350478Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.939557ms
grafana | logger=migrator t=2025-06-18T14:48:07.360581489Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
grafana | logger=migrator t=2025-06-18T14:48:07.369633108Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=9.051939ms
grafana | logger=migrator t=2025-06-18T14:48:07.374095586Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
grafana | logger=migrator t=2025-06-18T14:48:07.374120896Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=26.101µs
grafana | logger=migrator t=2025-06-18T14:48:07.377298538Z level=info msg="Executing migration" id=create_alert_configuration_table
grafana | logger=migrator t=2025-06-18T14:48:07.378146935Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=847.987µs
grafana | logger=migrator t=2025-06-18T14:48:07.385046561Z level=info msg="Executing migration" id="Add column default in alert_configuration"
grafana | logger=migrator t=2025-06-18T14:48:07.395527267Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=10.481996ms
grafana | logger=migrator t=2025-06-18T14:48:07.398553037Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
grafana | logger=migrator t=2025-06-18T14:48:07.398566407Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=13.91µs
grafana | logger=migrator t=2025-06-18T14:48:07.40174171Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
grafana | logger=migrator t=2025-06-18T14:48:07.406408911Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.667081ms
grafana | logger=migrator t=2025-06-18T14:48:07.409505602Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
grafana | logger=migrator t=2025-06-18T14:48:07.410534823Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.029431ms
grafana | logger=migrator t=2025-06-18T14:48:07.414870188Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
grafana | logger=migrator t=2025-06-18T14:48:07.421264434Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.392996ms
grafana | logger=migrator t=2025-06-18T14:48:07.424528538Z level=info msg="Executing migration" id=create_ngalert_configuration_table
grafana | logger=migrator t=2025-06-18T14:48:07.42515088Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=621.542µs
grafana | logger=migrator t=2025-06-18T14:48:07.428535937Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
grafana | logger=migrator t=2025-06-18T14:48:07.429271851Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=735.284µs
grafana | logger=migrator t=2025-06-18T14:48:07.434701848Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
grafana | logger=migrator t=2025-06-18T14:48:07.443998941Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=9.298353ms
grafana | logger=migrator t=2025-06-18T14:48:07.448469229Z level=info msg="Executing migration" id="create provenance_type table"
grafana | logger=migrator t=2025-06-18T14:48:07.449125432Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=656.103µs
grafana | logger=migrator t=2025-06-18T14:48:07.453321745Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
grafana | logger=migrator t=2025-06-18T14:48:07.45461006Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.281045ms
grafana | logger=migrator t=2025-06-18T14:48:07.458918745Z level=info msg="Executing migration" id="create alert_image table"
grafana | logger=migrator t=2025-06-18T14:48:07.459840063Z level=info msg="Migration successfully executed" id="create alert_image table" duration=926.878µs
grafana | logger=migrator t=2025-06-18T14:48:07.462823442Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
grafana | logger=migrator t=2025-06-18T14:48:07.463825452Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.00121ms
grafana | logger=migrator t=2025-06-18T14:48:07.468225918Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
grafana | logger=migrator t=2025-06-18T14:48:07.468242898Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=18.14µs
grafana | logger=migrator t=2025-06-18T14:48:07.473828518Z level=info msg="Executing migration" id=create_alert_configuration_history_table
grafana | logger=migrator t=2025-06-18T14:48:07.474792888Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=963.879µs
grafana | logger=migrator t=2025-06-18T14:48:07.478828847Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
grafana | logger=migrator t=2025-06-18T14:48:07.479751755Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=925.608µs
grafana | logger=migrator t=2025-06-18T14:48:07.482937258Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2025-06-18T14:48:07.483316986Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2025-06-18T14:48:07.489297533Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
grafana | logger=migrator t=2025-06-18T14:48:07.489736441Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=438.878µs
grafana | logger=migrator t=2025-06-18T14:48:07.493815242Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
grafana | logger=migrator t=2025-06-18T14:48:07.495414104Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.598442ms
grafana | logger=migrator t=2025-06-18T14:48:07.499247719Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
grafana | logger=migrator t=2025-06-18T14:48:07.507558533Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.310935ms
grafana | logger=migrator t=2025-06-18T14:48:07.512239347Z level=info msg="Executing migration" id="create library_element table v1"
grafana | logger=migrator t=2025-06-18T14:48:07.513219956Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=980.299µs
grafana | logger=migrator t=2025-06-18T14:48:07.516751977Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
grafana | logger=migrator t=2025-06-18T14:48:07.517790218Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.037131ms
grafana | logger=migrator t=2025-06-18T14:48:07.522764046Z level=info msg="Executing migration" id="create library_element_connection table v1"
grafana | logger=migrator t=2025-06-18T14:48:07.523589493Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=825.267µs
grafana | logger=migrator t=2025-06-18T14:48:07.561786324Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
grafana | logger=migrator t=2025-06-18T14:48:07.56308682Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.303216ms
grafana | logger=migrator t=2025-06-18T14:48:07.567502138Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
grafana | logger=migrator t=2025-06-18T14:48:07.568201893Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=699.075µs
grafana | logger=migrator t=2025-06-18T14:48:07.571358435Z level=info msg="Executing migration" id="increase max description length to 2048"
grafana | logger=migrator t=2025-06-18T14:48:07.571384206Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=24.871µs
grafana | logger=migrator t=2025-06-18T14:48:07.578901076Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
grafana | logger=migrator t=2025-06-18T14:48:07.578921116Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=20.43µs
grafana | logger=migrator t=2025-06-18T14:48:07.583722532Z level=info msg="Executing migration" id="add library_element folder uid"
grafana | logger=migrator t=2025-06-18T14:48:07.594347694Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.601131ms
grafana | logger=migrator t=2025-06-18T14:48:07.597424675Z level=info msg="Executing migration" id="populate library_element folder_uid"
grafana | logger=migrator t=2025-06-18T14:48:07.597812984Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=388.378µs
grafana | logger=migrator t=2025-06-18T14:48:07.60118571Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
grafana | logger=migrator t=2025-06-18T14:48:07.602314483Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.124183ms
grafana | logger=migrator t=2025-06-18T14:48:07.608851504Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
grafana | logger=migrator t=2025-06-18T14:48:07.60917716Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=327.006µs
grafana | logger=migrator t=2025-06-18T14:48:07.612589528Z level=info msg="Executing migration" id="create data_keys table"
grafana | logger=migrator t=2025-06-18T14:48:07.61368447Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.067081ms
grafana | logger=migrator t=2025-06-18T14:48:07.616952355Z level=info msg="Executing migration" id="create secrets table"
grafana | logger=migrator t=2025-06-18T14:48:07.617821622Z level=info msg="Migration successfully executed" id="create secrets table" duration=869.057µs
grafana | logger=migrator t=2025-06-18T14:48:07.622547756Z level=info msg="Executing migration" id="rename data_keys name column to id"
grafana | logger=migrator t=2025-06-18T14:48:07.66231813Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=39.767524ms
grafana | logger=migrator t=2025-06-18T14:48:07.665772759Z level=info msg="Executing migration" id="add name column into data_keys"
grafana | logger=migrator t=2025-06-18T14:48:07.671453832Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.680923ms
grafana | logger=migrator t=2025-06-18T14:48:07.675089355Z level=info msg="Executing migration" id="copy data_keys id column values into name"
grafana | logger=migrator t=2025-06-18T14:48:07.675237868Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=148.473µs
grafana | logger=migrator t=2025-06-18T14:48:07.678531263Z level=info msg="Executing migration" id="rename data_keys name column to label"
grafana | logger=migrator t=2025-06-18T14:48:07.717274166Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=38.732773ms
grafana | logger=migrator t=2025-06-18T14:48:07.735316947Z level=info msg="Executing migration" id="rename data_keys id column back to name"
grafana | logger=migrator t=2025-06-18T14:48:07.768275594Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.957347ms
grafana | logger=migrator t=2025-06-18T14:48:07.771902036Z level=info msg="Executing migration" id="create kv_store table v1"
grafana | logger=migrator t=2025-06-18T14:48:07.772918907Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.016511ms
grafana | logger=migrator t=2025-06-18T14:48:07.776877856Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
grafana | logger=migrator t=2025-06-18T14:48:07.778019738Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.141552ms
grafana | logger=migrator t=2025-06-18T14:48:07.784662881Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
grafana | logger=migrator t=2025-06-18T14:48:07.784891095Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=228.244µs
grafana | logger=migrator t=2025-06-18T14:48:07.790618939Z level=info msg="Executing migration" id="create permission table"
grafana | logger=migrator t=2025-06-18T14:48:07.791864564Z level=info msg="Migration successfully executed" id="create permission table" duration=1.244725ms
grafana | logger=migrator t=2025-06-18T14:48:07.797495987Z level=info msg="Executing migration" id="add unique index permission.role_id"
grafana | logger=migrator t=2025-06-18T14:48:07.799086698Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.590241ms
grafana | logger=migrator t=2025-06-18T14:48:07.803936636Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
grafana | logger=migrator t=2025-06-18T14:48:07.805513317Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.576561ms
grafana | logger=migrator t=2025-06-18T14:48:07.809539198Z level=info msg="Executing migration" id="create role table"
grafana | logger=migrator t=2025-06-18T14:48:07.810544958Z level=info msg="Migration successfully executed" id="create role table" duration=1.00557ms
grafana | logger=migrator t=2025-06-18T14:48:07.815829583Z level=info msg="Executing migration" id="add column display_name"
grafana | logger=migrator t=2025-06-18T14:48:07.827344323Z level=info msg="Migration successfully executed" id="add column display_name" duration=11.51226ms
grafana | logger=migrator t=2025-06-18T14:48:07.831839343Z level=info msg="Executing migration" id="add column group_name"
grafana | logger=migrator t=2025-06-18T14:48:07.837099638Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.260076ms
grafana | logger=migrator t=2025-06-18T14:48:07.841772341Z level=info msg="Executing migration" id="add index role.org_id"
grafana | logger=migrator t=2025-06-18T14:48:07.842893573Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.121232ms
grafana | logger=migrator t=2025-06-18T14:48:07.846690428Z level=info msg="Executing migration" id="add unique index role_org_id_name"
grafana | logger=migrator t=2025-06-18T14:48:07.848366693Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.675474ms
grafana | logger=migrator t=2025-06-18T14:48:07.853321731Z level=info msg="Executing migration" id="add index role_org_id_uid"
grafana | logger=migrator t=2025-06-18T14:48:07.855008634Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.683023ms
grafana | logger=migrator t=2025-06-18T14:48:07.858730599Z level=info msg="Executing migration" id="create team role table"
grafana | logger=migrator t=2025-06-18T14:48:07.859684458Z level=info msg="Migration successfully executed" id="create team role table" duration=953.629µs
grafana | logger=migrator t=2025-06-18T14:48:07.865259899Z level=info msg="Executing migration" id="add index team_role.org_id"
grafana | logger=migrator t=2025-06-18T14:48:07.867616496Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=2.353127ms
grafana | logger=migrator t=2025-06-18T14:48:07.874240138Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
grafana | logger=migrator t=2025-06-18T14:48:07.875557674Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.316846ms
grafana | logger=migrator t=2025-06-18T14:48:07.879299419Z level=info msg="Executing migration" id="add index team_role.team_id"
grafana | logger=migrator t=2025-06-18T14:48:07.881432772Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.131133ms
grafana | logger=migrator t=2025-06-18T14:48:07.885590785Z level=info msg="Executing migration" id="create user role table"
grafana | logger=migrator t=2025-06-18T14:48:07.88735956Z level=info msg="Migration successfully executed" id="create user role table" duration=1.766666ms
grafana | logger=migrator t=2025-06-18T14:48:07.892493272Z level=info msg="Executing migration" id="add index user_role.org_id"
grafana | logger=migrator t=2025-06-18T14:48:07.893713437Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.218955ms
grafana | logger=migrator t=2025-06-18T14:48:07.898458391Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
grafana | logger=migrator t=2025-06-18T14:48:07.900009763Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.550012ms
grafana | logger=migrator t=2025-06-18T14:48:07.905243667Z level=info msg="Executing migration" id="add index user_role.user_id"
grafana | logger=migrator t=2025-06-18T14:48:07.907307708Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.068281ms
grafana | logger=migrator t=2025-06-18T14:48:07.911337789Z level=info msg="Executing migration" id="create builtin role table"
grafana | logger=migrator
t=2025-06-18T14:48:07.912389919Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.05149ms grafana | logger=migrator t=2025-06-18T14:48:07.917594443Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-18T14:48:07.918797117Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.198214ms grafana | logger=migrator t=2025-06-18T14:48:07.922346548Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-18T14:48:07.92345493Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.108852ms grafana | logger=migrator t=2025-06-18T14:48:07.927921109Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-18T14:48:07.936150553Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.228824ms grafana | logger=migrator t=2025-06-18T14:48:07.941642043Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-18T14:48:07.943083172Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.452239ms grafana | logger=migrator t=2025-06-18T14:48:07.946651653Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-18T14:48:07.947847877Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.195284ms grafana | logger=migrator t=2025-06-18T14:48:07.951390647Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-18T14:48:07.95252217Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.130973ms grafana | logger=migrator t=2025-06-18T14:48:07.958681653Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-18T14:48:07.959818675Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.136092ms grafana | logger=migrator t=2025-06-18T14:48:07.963732994Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-18T14:48:07.964672343Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=938.479µs grafana | logger=migrator t=2025-06-18T14:48:07.968458578Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-18T14:48:07.969914567Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.455159ms grafana | logger=migrator t=2025-06-18T14:48:07.974607961Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-18T14:48:07.982647722Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.038621ms grafana | logger=migrator t=2025-06-18T14:48:07.987557469Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-18T14:48:07.995680702Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.121613ms grafana | logger=migrator t=2025-06-18T14:48:07.999459237Z level=info msg="Executing migration" id="permission 
attribute migration" grafana | logger=migrator t=2025-06-18T14:48:08.005398724Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.938787ms grafana | logger=migrator t=2025-06-18T14:48:08.010362472Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-18T14:48:08.018786745Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.422553ms grafana | logger=migrator t=2025-06-18T14:48:08.022747211Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-18T14:48:08.024002081Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.25479ms grafana | logger=migrator t=2025-06-18T14:48:08.027590368Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-18T14:48:08.028435428Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=844.52µs grafana | logger=migrator t=2025-06-18T14:48:08.035392945Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-18T14:48:08.037077126Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.683721ms grafana | logger=migrator t=2025-06-18T14:48:08.141156613Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-18T14:48:08.15268624Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=11.530168ms grafana | logger=migrator t=2025-06-18T14:48:08.264161894Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-18T14:48:08.26689082Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=2.727796ms grafana | logger=migrator t=2025-06-18T14:48:08.272348751Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-18T14:48:08.274303618Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.954767ms grafana | logger=migrator t=2025-06-18T14:48:08.277993007Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-18T14:48:08.278975661Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=982.174µs grafana | logger=migrator t=2025-06-18T14:48:08.282582698Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-18T14:48:08.283767697Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.187979ms grafana | logger=migrator t=2025-06-18T14:48:08.288153312Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-18T14:48:08.288174803Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=21.42µs grafana | logger=migrator t=2025-06-18T14:48:08.291794469Z level=info msg="Executing migration" id="create 
query_history_details table v1" grafana | logger=migrator t=2025-06-18T14:48:08.293151362Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.356133ms grafana | logger=migrator t=2025-06-18T14:48:08.297312082Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-18T14:48:08.297413984Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=102.632µs grafana | logger=migrator t=2025-06-18T14:48:08.302496107Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-18T14:48:08.303165533Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=669.436µs grafana | logger=migrator t=2025-06-18T14:48:08.306809961Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-18T14:48:08.307486878Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=677.147µs grafana | logger=migrator t=2025-06-18T14:48:08.311005922Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-18T14:48:08.311700909Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=694.237µs grafana | logger=migrator t=2025-06-18T14:48:08.315856989Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-18T14:48:08.316156966Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=299.477µs grafana | logger=migrator t=2025-06-18T14:48:08.322078619Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-18T14:48:08.323193786Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=1.115167ms grafana | logger=migrator t=2025-06-18T14:48:08.327183342Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-18T14:48:08.328735009Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.551947ms grafana | logger=migrator t=2025-06-18T14:48:08.33252753Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-18T14:48:08.333837631Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.309431ms grafana | logger=migrator t=2025-06-18T14:48:08.338954655Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-18T14:48:08.34953781Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.582395ms grafana | logger=migrator t=2025-06-18T14:48:08.353246469Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-18T14:48:08.353263209Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=16.63µs grafana | logger=migrator t=2025-06-18T14:48:08.356697343Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-18T14:48:08.357474631Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=778.729µs grafana | 
logger=migrator t=2025-06-18T14:48:08.361764124Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-18T14:48:08.363683561Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.917447ms grafana | logger=migrator t=2025-06-18T14:48:08.368911526Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-18T14:48:08.370917535Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.005269ms grafana | logger=migrator t=2025-06-18T14:48:08.376748655Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-18T14:48:08.389140594Z level=info msg="Migration successfully executed" id="add correlation config column" duration=12.428089ms grafana | logger=migrator t=2025-06-18T14:48:08.394469322Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-18T14:48:08.395669071Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.204879ms grafana | logger=migrator t=2025-06-18T14:48:08.400333083Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-18T14:48:08.401513352Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.179419ms grafana | logger=migrator t=2025-06-18T14:48:08.4072562Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-18T14:48:08.435898059Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=28.640639ms grafana | logger=migrator t=2025-06-18T14:48:08.439712601Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-18T14:48:08.440882589Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.170168ms grafana | logger=migrator t=2025-06-18T14:48:08.444133497Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-18T14:48:08.445304786Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.171139ms grafana | logger=migrator t=2025-06-18T14:48:08.451490054Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-18T14:48:08.453640887Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.147962ms grafana | logger=migrator t=2025-06-18T14:48:08.458823792Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-18T14:48:08.460093272Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.26964ms grafana | logger=migrator t=2025-06-18T14:48:08.464745734Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-18T14:48:08.465331378Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=583.374µs grafana | logger=migrator t=2025-06-18T14:48:08.469613291Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-18T14:48:08.471242451Z 
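The correlation migrations just above follow the migrator's temp-table rebuild pattern: rename the v1 table to a *_tmp_qwerty name, create the v2 schema, copy the rows across, then drop the temporary table. A minimal illustrative sketch of that pattern in Python against SQLite (hypothetical column set; not Grafana's actual migrator code):

    # Sketch of the rename -> create v2 -> copy -> drop rebuild pattern
    # seen in the log above (hypothetical schema for illustration only).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # v1 table whose schema we want to change.
    cur.execute("CREATE TABLE correlation (uid TEXT, source_uid TEXT)")
    cur.execute("INSERT INTO correlation VALUES ('u1', 's1')")

    # 1. Rename the old table out of the way.
    cur.execute("ALTER TABLE correlation RENAME TO correlation_tmp_qwerty")
    # 2. Create the v2 table with the desired schema.
    cur.execute("CREATE TABLE correlation "
                "(uid TEXT, source_uid TEXT, org_id INTEGER DEFAULT 1)")
    # 3. Copy the v1 rows across.
    cur.execute("INSERT INTO correlation (uid, source_uid) "
                "SELECT uid, source_uid FROM correlation_tmp_qwerty")
    # 4. Drop the temporary table.
    cur.execute("DROP TABLE correlation_tmp_qwerty")
    conn.commit()
    print(cur.execute("SELECT * FROM correlation").fetchall())  # [('u1', 's1', 1)]

This rebuild is the standard workaround for engines that cannot redefine a column in place, which is why the rename and copy steps dominate the durations in this part of the log.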
grafana | logger=migrator t=2025-06-18T14:48:08.475577165Z level=info msg="Executing migration" id="add provisioning column"
grafana | logger=migrator t=2025-06-18T14:48:08.484586342Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.007617ms
grafana | logger=migrator t=2025-06-18T14:48:08.489545961Z level=info msg="Executing migration" id="add type column"
grafana | logger=migrator t=2025-06-18T14:48:08.496725984Z level=info msg="Migration successfully executed" id="add type column" duration=7.178443ms
grafana | logger=migrator t=2025-06-18T14:48:08.500732781Z level=info msg="Executing migration" id="create entity_events table"
grafana | logger=migrator t=2025-06-18T14:48:08.502044512Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.311311ms
grafana | logger=migrator t=2025-06-18T14:48:08.505977946Z level=info msg="Executing migration" id="create dashboard public config v1"
grafana | logger=migrator t=2025-06-18T14:48:08.507135404Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.156768ms
grafana | logger=migrator t=2025-06-18T14:48:08.513014986Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-18T14:48:08.51359154Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-18T14:48:08.517005812Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-18T14:48:08.517592297Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-18T14:48:08.521179823Z level=info msg="Executing migration" id="Drop old dashboard public config table"
grafana | logger=migrator t=2025-06-18T14:48:08.522111935Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=931.822µs
grafana | logger=migrator t=2025-06-18T14:48:08.526608713Z level=info msg="Executing migration" id="recreate dashboard public config v1"
grafana | logger=migrator t=2025-06-18T14:48:08.528766946Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=2.156313ms
grafana | logger=migrator t=2025-06-18T14:48:08.536090162Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-18T14:48:08.538764816Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.684334ms
grafana | logger=migrator t=2025-06-18T14:48:08.545359565Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-18T14:48:08.547088427Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.728923ms
grafana | logger=migrator t=2025-06-18T14:48:08.552306942Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2025-06-18T14:48:08.555011417Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=2.704255ms
grafana | logger=migrator t=2025-06-18T14:48:08.558772348Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2025-06-18T14:48:08.559942586Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.170138ms
grafana | logger=migrator t=2025-06-18T14:48:08.564430244Z level=info msg="Executing migration" id="Drop public config table"
grafana | logger=migrator t=2025-06-18T14:48:08.565338906Z level=info msg="Migration successfully executed" id="Drop public config table" duration=907.982µs
grafana | logger=migrator t=2025-06-18T14:48:08.568900512Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
grafana | logger=migrator t=2025-06-18T14:48:08.570532941Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.6319ms
grafana | logger=migrator t=2025-06-18T14:48:08.574955278Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2025-06-18T14:48:08.576170336Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.213818ms
grafana | logger=migrator t=2025-06-18T14:48:08.580992083Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2025-06-18T14:48:08.583272838Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.285905ms
grafana | logger=migrator t=2025-06-18T14:48:08.586504846Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
grafana | logger=migrator t=2025-06-18T14:48:08.587653094Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.147638ms
grafana | logger=migrator t=2025-06-18T14:48:08.590641176Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
grafana | logger=migrator t=2025-06-18T14:48:08.611996629Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=21.355394ms
grafana | logger=migrator t=2025-06-18T14:48:08.650773153Z level=info msg="Executing migration" id="add annotations_enabled column"
grafana | logger=migrator t=2025-06-18T14:48:08.661324277Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=10.554174ms
grafana | logger=migrator t=2025-06-18T14:48:08.664783331Z level=info msg="Executing migration" id="add time_selection_enabled column"
grafana | logger=migrator t=2025-06-18T14:48:08.67099371Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.210299ms
grafana | logger=migrator t=2025-06-18T14:48:08.675987261Z level=info msg="Executing migration" id="delete orphaned public dashboards"
grafana | logger=migrator t=2025-06-18T14:48:08.676631916Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=646.816µs
grafana | logger=migrator t=2025-06-18T14:48:08.682061646Z level=info msg="Executing migration" id="add share column"
grafana | logger=migrator t=2025-06-18T14:48:08.691602086Z level=info msg="Migration successfully executed" id="add share column" duration=9.53488ms
grafana | logger=migrator t=2025-06-18T14:48:08.697298754Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
grafana | logger=migrator t=2025-06-18T14:48:08.697615161Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=315.658µs
grafana | logger=migrator t=2025-06-18T14:48:08.701337291Z level=info msg="Executing migration" id="create file table"
grafana | logger=migrator t=2025-06-18T14:48:08.70217275Z level=info msg="Migration successfully executed" id="create file table" duration=834.779µs
grafana | logger=migrator t=2025-06-18T14:48:08.707064219Z level=info msg="Executing migration" id="file table idx: path natural pk"
grafana | logger=migrator t=2025-06-18T14:48:08.707895339Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=829.42µs
grafana | logger=migrator t=2025-06-18T14:48:08.713957815Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
grafana | logger=migrator t=2025-06-18T14:48:08.715749448Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.790463ms
grafana | logger=migrator t=2025-06-18T14:48:08.719478128Z level=info msg="Executing migration" id="create file_meta table"
grafana | logger=migrator t=2025-06-18T14:48:08.720458931Z level=info msg="Migration successfully executed" id="create file_meta table" duration=980.443µs
grafana | logger=migrator t=2025-06-18T14:48:08.7249876Z level=info msg="Executing migration" id="file table idx: path key"
grafana | logger=migrator t=2025-06-18T14:48:08.726145418Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.156958ms
grafana | logger=migrator t=2025-06-18T14:48:08.729442938Z level=info msg="Executing migration" id="set path collation in file table"
grafana | logger=migrator t=2025-06-18T14:48:08.729463168Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=20.361µs
grafana | logger=migrator t=2025-06-18T14:48:08.734290244Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
grafana | logger=migrator t=2025-06-18T14:48:08.734320584Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=31.18µs
grafana | logger=migrator t=2025-06-18T14:48:08.738073035Z level=info msg="Executing migration" id="managed permissions migration"
grafana | logger=migrator t=2025-06-18T14:48:08.738922356Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=848.381µs
grafana | logger=migrator t=2025-06-18T14:48:08.745103854Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
grafana | logger=migrator t=2025-06-18T14:48:08.745410942Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=306.268µs
grafana | logger=migrator t=2025-06-18T14:48:08.748709272Z level=info msg="Executing migration" id="RBAC action name migrator"
grafana | logger=migrator t=2025-06-18T14:48:08.750189847Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.477015ms
grafana | logger=migrator t=2025-06-18T14:48:08.753763053Z level=info msg="Executing migration" id="Add UID column to playlist"
grafana | logger=migrator t=2025-06-18T14:48:08.762920754Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.158921ms
grafana | logger=migrator t=2025-06-18T14:48:08.766655543Z level=info msg="Executing migration" id="Update uid column values in playlist"
grafana | logger=migrator t=2025-06-18T14:48:08.766816578Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=159.975µs
grafana | logger=migrator t=2025-06-18T14:48:08.77149225Z level=info msg="Executing migration" id="Add index for uid in playlist"
grafana | logger=migrator t=2025-06-18T14:48:08.772918624Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.425784ms
grafana | logger=migrator t=2025-06-18T14:48:08.776377907Z level=info msg="Executing migration" id="update group index for alert rules"
grafana | logger=migrator t=2025-06-18T14:48:08.776790178Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=411.511µs
grafana | logger=migrator t=2025-06-18T14:48:08.78023Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
grafana | logger=migrator t=2025-06-18T14:48:08.780439115Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=208.735µs
grafana | logger=migrator t=2025-06-18T14:48:08.78481604Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
grafana | logger=migrator t=2025-06-18T14:48:08.785347734Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=531.444µs
grafana | logger=migrator t=2025-06-18T14:48:08.793197582Z level=info msg="Executing migration" id="add action column to seed_assignment"
grafana | logger=migrator t=2025-06-18T14:48:08.804185417Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.991795ms
grafana | logger=migrator t=2025-06-18T14:48:08.840147143Z level=info msg="Executing migration" id="add scope column to seed_assignment"
grafana | logger=migrator t=2025-06-18T14:48:08.847477389Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.344346ms
grafana | logger=migrator t=2025-06-18T14:48:08.850917133Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
grafana | logger=migrator t=2025-06-18T14:48:08.851777983Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=859.94µs
grafana | logger=migrator t=2025-06-18T14:48:08.856458376Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
grafana | logger=migrator t=2025-06-18T14:48:08.935394306Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=78.93351ms
grafana | logger=migrator t=2025-06-18T14:48:08.978380992Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
grafana | logger=migrator t=2025-06-18T14:48:08.979690653Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.308701ms
grafana | logger=migrator t=2025-06-18T14:48:08.985683317Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
grafana | logger=migrator t=2025-06-18T14:48:08.987167463Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.483426ms
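The playlist UID migrations above show the other recurring shape in this log: add a nullable column, backfill it row by row, then index it. A sketch of that sequence under the same caveats (hypothetical schema and UID generator; not the real Grafana code):

    # Sketch of the add-column / backfill / index sequence logged above
    # for "Add UID column to playlist" (illustration only).
    import sqlite3
    import secrets

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE playlist (id INTEGER PRIMARY KEY, name TEXT)")
    cur.executemany("INSERT INTO playlist (name) VALUES (?)", [("a",), ("b",)])

    # 1. "Add UID column to playlist"
    cur.execute("ALTER TABLE playlist ADD COLUMN uid TEXT")
    # 2. "Update uid column values in playlist"
    for (pid,) in cur.execute("SELECT id FROM playlist").fetchall():
        conn.execute("UPDATE playlist SET uid = ? WHERE id = ?",
                     (secrets.token_hex(5), pid))
    # 3. "Add index for uid in playlist"
    cur.execute("CREATE UNIQUE INDEX UQE_playlist_uid ON playlist (uid)")
    conn.commit()

Splitting the backfill from the DDL keeps each migration step idempotent and separately recorded in the migration log, which is why the three steps appear as three distinct records above.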
grafana | logger=migrator t=2025-06-18T14:48:08.991697332Z level=info msg="Executing migration" id="add primary key to seed_assigment"
grafana | logger=migrator t=2025-06-18T14:48:09.020134086Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.436524ms
grafana | logger=migrator t=2025-06-18T14:48:09.072071481Z level=info msg="Executing migration" id="add origin column to seed_assignment"
grafana | logger=migrator t=2025-06-18T14:48:09.083898064Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=11.822413ms
grafana | logger=migrator t=2025-06-18T14:48:09.08957319Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
grafana | logger=migrator t=2025-06-18T14:48:09.089829316Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=255.476µs
grafana | logger=migrator t=2025-06-18T14:48:09.093507745Z level=info msg="Executing migration" id="prevent seeding OnCall access"
grafana | logger=migrator t=2025-06-18T14:48:09.093651068Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=142.363µs
grafana | logger=migrator t=2025-06-18T14:48:09.100759148Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
grafana | logger=migrator t=2025-06-18T14:48:09.101118417Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=359.389µs
grafana | logger=migrator t=2025-06-18T14:48:09.107306085Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
grafana | logger=migrator t=2025-06-18T14:48:09.107667424Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=350.378µs
grafana | logger=migrator t=2025-06-18T14:48:09.113077644Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
grafana | logger=migrator t=2025-06-18T14:48:09.113309859Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=231.545µs
grafana | logger=migrator t=2025-06-18T14:48:09.116596587Z level=info msg="Executing migration" id="create folder table"
grafana | logger=migrator t=2025-06-18T14:48:09.117557921Z level=info msg="Migration successfully executed" id="create folder table" duration=960.954µs
grafana | logger=migrator t=2025-06-18T14:48:09.122202432Z level=info msg="Executing migration" id="Add index for parent_uid"
grafana | logger=migrator t=2025-06-18T14:48:09.123943814Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.740262ms
grafana | logger=migrator t=2025-06-18T14:48:09.129441176Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
grafana | logger=migrator t=2025-06-18T14:48:09.131250219Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.807943ms
grafana | logger=migrator t=2025-06-18T14:48:09.141241419Z level=info msg="Executing migration" id="Update folder title length"
grafana | logger=migrator t=2025-06-18T14:48:09.141323911Z level=info msg="Migration successfully executed" id="Update folder title length" duration=92.863µs
grafana | logger=migrator t=2025-06-18T14:48:09.144725633Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
grafana | logger=migrator t=2025-06-18T14:48:09.146157107Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.430794ms
grafana | logger=migrator t=2025-06-18T14:48:09.152461558Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
grafana | logger=migrator t=2025-06-18T14:48:09.153606825Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.144527ms
grafana | logger=migrator t=2025-06-18T14:48:09.161422082Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
grafana | logger=migrator t=2025-06-18T14:48:09.163276057Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.853365ms
grafana | logger=migrator t=2025-06-18T14:48:09.166771631Z level=info msg="Executing migration" id="Sync dashboard and folder table"
grafana | logger=migrator t=2025-06-18T14:48:09.167204812Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=433.741µs
grafana | logger=migrator t=2025-06-18T14:48:09.17172451Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
grafana | logger=migrator t=2025-06-18T14:48:09.171992936Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=266.686µs
grafana | logger=migrator t=2025-06-18T14:48:09.175225663Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
grafana | logger=migrator t=2025-06-18T14:48:09.176931935Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.705572ms
grafana | logger=migrator t=2025-06-18T14:48:09.181524315Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
grafana | logger=migrator t=2025-06-18T14:48:09.182733354Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.209369ms
grafana | logger=migrator t=2025-06-18T14:48:09.187704512Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
grafana | logger=migrator t=2025-06-18T14:48:09.188720647Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.015735ms
grafana | logger=migrator t=2025-06-18T14:48:09.192178359Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
grafana | logger=migrator t=2025-06-18T14:48:09.19343777Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.25608ms
grafana | logger=migrator t=2025-06-18T14:48:09.196681688Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
grafana | logger=migrator t=2025-06-18T14:48:09.197904437Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.221949ms
grafana | logger=migrator t=2025-06-18T14:48:09.202121999Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title"
grafana | logger=migrator t=2025-06-18T14:48:09.203246475Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.124056ms
successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.124056ms grafana | logger=migrator t=2025-06-18T14:48:09.208118062Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-18T14:48:09.2092668Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.143108ms grafana | logger=migrator t=2025-06-18T14:48:09.214866034Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-18T14:48:09.216833461Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.966657ms grafana | logger=migrator t=2025-06-18T14:48:09.221465422Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-18T14:48:09.22264518Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.179709ms grafana | logger=migrator t=2025-06-18T14:48:09.22889528Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-18T14:48:09.230372316Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.476766ms grafana | logger=migrator t=2025-06-18T14:48:09.23516557Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-18T14:48:09.237042185Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.876235ms grafana | logger=migrator t=2025-06-18T14:48:09.242584168Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-18T14:48:09.243857398Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.2731ms grafana | logger=migrator t=2025-06-18T14:48:09.247146748Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-18T14:48:09.247486406Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=340.178µs grafana | logger=migrator t=2025-06-18T14:48:09.249991546Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-18T14:48:09.259586336Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.594ms grafana | logger=migrator t=2025-06-18T14:48:09.264666398Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-18T14:48:09.265226392Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=560.544µs grafana | logger=migrator t=2025-06-18T14:48:09.268311325Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-18T14:48:09.268327306Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=16.451µs grafana | logger=migrator t=2025-06-18T14:48:09.273008328Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-18T14:48:09.2747755Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" 
duration=1.766492ms grafana | logger=migrator t=2025-06-18T14:48:09.280862756Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-18T14:48:09.280890317Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=28.421µs grafana | logger=migrator t=2025-06-18T14:48:09.284480112Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-18T14:48:09.28645941Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.969568ms grafana | logger=migrator t=2025-06-18T14:48:09.291517361Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-18T14:48:09.292614507Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.096526ms grafana | logger=migrator t=2025-06-18T14:48:09.29691149Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-18T14:48:09.298781726Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.869696ms grafana | logger=migrator t=2025-06-18T14:48:09.302693869Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-18T14:48:09.304469832Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.775193ms grafana | logger=migrator t=2025-06-18T14:48:09.30937873Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-18T14:48:09.310144088Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=765.639µs grafana | logger=migrator t=2025-06-18T14:48:09.315006024Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-18T14:48:09.315447625Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=442.361µs grafana | logger=migrator t=2025-06-18T14:48:09.320310132Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-18T14:48:09.321312446Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=1.001744ms grafana | logger=migrator t=2025-06-18T14:48:09.326984632Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-18T14:48:09.327883933Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=898.481µs grafana | logger=migrator t=2025-06-18T14:48:09.33320886Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-18T14:48:09.335052134Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.842044ms grafana | logger=migrator t=2025-06-18T14:48:09.340192248Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-18T14:48:09.349978333Z level=info msg="Migration successfully executed" id="add stack_id 
column" duration=9.785695ms grafana | logger=migrator t=2025-06-18T14:48:09.353868906Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-18T14:48:09.366876978Z level=info msg="Migration successfully executed" id="add region_slug column" duration=13.008671ms grafana | logger=migrator t=2025-06-18T14:48:09.370371972Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-18T14:48:09.380237599Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=9.857376ms grafana | logger=migrator t=2025-06-18T14:48:09.408598508Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-18T14:48:09.419336566Z level=info msg="Migration successfully executed" id="add migration uid column" duration=10.742658ms grafana | logger=migrator t=2025-06-18T14:48:09.425422321Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-18T14:48:09.425600236Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=178.695µs grafana | logger=migrator t=2025-06-18T14:48:09.472862829Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-18T14:48:09.4750304Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.170371ms grafana | logger=migrator t=2025-06-18T14:48:09.514433296Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-18T14:48:09.527165221Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=12.732265ms grafana | logger=migrator t=2025-06-18T14:48:09.569726991Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-18T14:48:09.570103109Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=377.799µs grafana | logger=migrator t=2025-06-18T14:48:09.616573874Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-18T14:48:09.619080514Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=2.50919ms grafana | logger=migrator t=2025-06-18T14:48:09.699470781Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-18T14:48:09.730059855Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=30.634625ms grafana | logger=migrator t=2025-06-18T14:48:09.735180377Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-18T14:48:09.736163461Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=982.494µs grafana | logger=migrator t=2025-06-18T14:48:09.740124166Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-18T14:48:09.741374846Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.2537ms grafana | logger=migrator t=2025-06-18T14:48:09.74447745Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana 
| logger=migrator t=2025-06-18T14:48:09.744873119Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=392.209µs
grafana | logger=migrator t=2025-06-18T14:48:09.748955267Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty"
grafana | logger=migrator t=2025-06-18T14:48:09.749859109Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=903.542µs
grafana | logger=migrator t=2025-06-18T14:48:09.75326601Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-18T14:48:09.783385323Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=30.119603ms
grafana | logger=migrator t=2025-06-18T14:48:09.786781784Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2"
grafana | logger=migrator t=2025-06-18T14:48:09.787464151Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=682.257µs
grafana | logger=migrator t=2025-06-18T14:48:09.793440304Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2"
grafana | logger=migrator t=2025-06-18T14:48:09.794799097Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.358693ms
grafana | logger=migrator t=2025-06-18T14:48:09.798859024Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2"
grafana | logger=migrator t=2025-06-18T14:48:09.800176006Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=1.322402ms
grafana | logger=migrator t=2025-06-18T14:48:09.804345656Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty"
grafana | logger=migrator t=2025-06-18T14:48:09.806238421Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.891925ms
grafana | logger=migrator t=2025-06-18T14:48:09.811871286Z level=info msg="Executing migration" id="add snapshot upload_url column"
grafana | logger=migrator t=2025-06-18T14:48:09.82245943Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=10.587324ms
grafana | logger=migrator t=2025-06-18T14:48:09.831756432Z level=info msg="Executing migration" id="add snapshot status column"
grafana | logger=migrator t=2025-06-18T14:48:09.842007039Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=10.247876ms
grafana | logger=migrator t=2025-06-18T14:48:09.84791841Z level=info msg="Executing migration" id="add snapshot local_directory column"
grafana | logger=migrator t=2025-06-18T14:48:09.858045312Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=10.126292ms
grafana | logger=migrator t=2025-06-18T14:48:09.861432414Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column"
grafana | logger=migrator t=2025-06-18T14:48:09.870529992Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=9.094838ms
grafana | logger=migrator t=2025-06-18T14:48:09.874716672Z level=info msg="Executing migration" id="add snapshot encryption_key column"
grafana | logger=migrator t=2025-06-18T14:48:09.886503774Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=11.786552ms
grafana | logger=migrator t=2025-06-18T14:48:09.909062335Z level=info msg="Executing migration" id="add snapshot error_string column"
grafana | logger=migrator t=2025-06-18T14:48:09.922048687Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=12.982852ms
grafana | logger=migrator t=2025-06-18T14:48:09.927526278Z level=info msg="Executing migration" id="create cloud_migration_resource table v1"
grafana | logger=migrator t=2025-06-18T14:48:09.92843514Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=909.612µs
grafana | logger=migrator t=2025-06-18T14:48:09.932053327Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column"
grafana | logger=migrator t=2025-06-18T14:48:09.972308652Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=40.245155ms
grafana | logger=migrator t=2025-06-18T14:48:09.992294411Z level=info msg="Executing migration" id="add cloud_migration_resource.name column"
grafana | logger=migrator t=2025-06-18T14:48:10.004801181Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=12.50901ms
grafana | logger=migrator t=2025-06-18T14:48:10.008408946Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column"
grafana | logger=migrator t=2025-06-18T14:48:10.015883015Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=7.472499ms
grafana | logger=migrator t=2025-06-18T14:48:10.020793992Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column"
grafana | logger=migrator t=2025-06-18T14:48:10.031656711Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=10.859349ms
grafana | logger=migrator t=2025-06-18T14:48:10.036925397Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column"
grafana | logger=migrator t=2025-06-18T14:48:10.045623864Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=8.697687ms
grafana | logger=migrator t=2025-06-18T14:48:10.048967084Z level=info msg="Executing migration" id="increase resource_uid column length"
grafana | logger=migrator t=2025-06-18T14:48:10.048988104Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=21.79µs
grafana | logger=migrator t=2025-06-18T14:48:10.052715923Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
grafana | logger=migrator t=2025-06-18T14:48:10.052742194Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=26.911µs
grafana | logger=migrator t=2025-06-18T14:48:10.059001544Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
grafana | logger=migrator t=2025-06-18T14:48:10.0722745Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=13.274716ms
grafana | logger=migrator t=2025-06-18T14:48:10.079597634Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
grafana | logger=migrator t=2025-06-18T14:48:10.08988733Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=10.288706ms
grafana | logger=migrator t=2025-06-18T14:48:10.093378443Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
grafana | logger=migrator t=2025-06-18T14:48:10.093776362Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=394.279µs
grafana | logger=migrator t=2025-06-18T14:48:10.097113412Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration"
grafana | logger=migrator t=2025-06-18T14:48:10.097394489Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=281.187µs
grafana | logger=migrator t=2025-06-18T14:48:10.101616309Z level=info msg="Executing migration" id="add record column to alert_rule table"
grafana | logger=migrator t=2025-06-18T14:48:10.113225316Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=11.606407ms
grafana | logger=migrator t=2025-06-18T14:48:10.118454821Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
grafana | logger=migrator t=2025-06-18T14:48:10.132573447Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=14.116756ms
grafana | logger=migrator t=2025-06-18T14:48:10.156386945Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table"
grafana | logger=migrator t=2025-06-18T14:48:10.168593666Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=12.208231ms
grafana | logger=migrator t=2025-06-18T14:48:10.177194312Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table"
grafana | logger=migrator t=2025-06-18T14:48:10.18553225Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=8.330058ms
grafana | logger=migrator t=2025-06-18T14:48:10.190299484Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read"
grafana | logger=migrator t=2025-06-18T14:48:10.190839217Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=539.583µs
grafana | logger=migrator t=2025-06-18T14:48:10.194233728Z level=info msg="Executing migration" id="add metadata column to alert_rule table"
grafana | logger=migrator t=2025-06-18T14:48:10.203781425Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.546707ms
grafana | logger=migrator t=2025-06-18T14:48:10.207180296Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table"
grafana | logger=migrator t=2025-06-18T14:48:10.216852097Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=9.670811ms
grafana | logger=migrator t=2025-06-18T14:48:10.222112572Z level=info msg="Executing migration" id="delete orphaned service account permissions"
grafana | logger=migrator t=2025-06-18T14:48:10.222303767Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=189.545µs
grafana | logger=migrator t=2025-06-18T14:48:10.22537585Z level=info msg="Executing migration" id="adding action set permissions"
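For reference, each migrator entry above is also persisted in Grafana's migration_log table, so a failed step can be located without rereading the console output. A minimal sketch, assuming the default SQLite store at /var/lib/grafana/grafana.db and a sqlite3 binary present in the image (neither is guaranteed by this compose setup):

$ docker exec grafana sqlite3 /var/lib/grafana/grafana.db \
    "SELECT migration_id, success, timestamp FROM migration_log WHERE success = 0;"
# empty output means every recorded migration applied cleanly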
migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-18T14:48:10.22575679Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=380.6µs grafana | logger=migrator t=2025-06-18T14:48:10.233162486Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-18T14:48:10.234999059Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.836643ms grafana | logger=migrator t=2025-06-18T14:48:10.239933407Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-18T14:48:10.239952838Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=19.681µs grafana | logger=migrator t=2025-06-18T14:48:10.243156534Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-18T14:48:10.243176574Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=20.57µs grafana | logger=migrator t=2025-06-18T14:48:10.245863069Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-18T14:48:10.246454002Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=589.903µs grafana | logger=migrator t=2025-06-18T14:48:10.250245243Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-18T14:48:10.261292536Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=11.048303ms grafana | logger=migrator t=2025-06-18T14:48:10.292578512Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-18T14:48:10.305696736Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=13.117564ms grafana | logger=migrator t=2025-06-18T14:48:10.310709165Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-18T14:48:10.311694588Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=984.883µs grafana | logger=migrator t=2025-06-18T14:48:10.317925087Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-18T14:48:10.319801672Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.875695ms grafana | logger=migrator t=2025-06-18T14:48:10.327430603Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-18T14:48:10.338501587Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=11.073564ms grafana | logger=migrator t=2025-06-18T14:48:10.343370324Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-18T14:48:10.351565039Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=8.193244ms grafana | logger=migrator t=2025-06-18T14:48:10.355663626Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana 
| logger=migrator t=2025-06-18T14:48:10.355767449Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-18T14:48:10.356153899Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-18T14:48:10.35623277Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=559.684µs grafana | logger=migrator t=2025-06-18T14:48:10.36085761Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-18T14:48:10.361539107Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=680.417µs grafana | logger=migrator t=2025-06-18T14:48:10.368275287Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-18T14:48:10.370333567Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.0579ms grafana | logger=migrator t=2025-06-18T14:48:10.374545797Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-18T14:48:10.376012542Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.467205ms grafana | logger=migrator t=2025-06-18T14:48:10.429454526Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-18T14:48:10.431649909Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=2.194363ms grafana | logger=migrator t=2025-06-18T14:48:10.437303194Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-18T14:48:10.438677737Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.374053ms grafana | logger=migrator t=2025-06-18T14:48:10.442163589Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-18T14:48:10.452507586Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=10.343197ms grafana | logger=migrator t=2025-06-18T14:48:10.455806295Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-18T14:48:10.463333104Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=7.526279ms grafana | logger=migrator t=2025-06-18T14:48:10.467739719Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-18T14:48:10.48119379Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=13.448451ms grafana | logger=migrator t=2025-06-18T14:48:10.485063652Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-18T14:48:10.492704064Z level=info msg="Migration successfully executed" 
id="add missing_series_evals_to_resolve column to alert_rule_version" duration=7.639782ms grafana | logger=migrator t=2025-06-18T14:48:10.499492656Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-18T14:48:10.499860034Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-18T14:48:10.499878435Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=388.159µs grafana | logger=migrator t=2025-06-18T14:48:10.504290131Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-18T14:48:10.505574751Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.28406ms grafana | logger=migrator t=2025-06-18T14:48:10.509160867Z level=info msg="migrations completed" performed=654 skipped=0 duration=6.176199857s grafana | logger=migrator t=2025-06-18T14:48:10.509868244Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-18T14:48:10.527493134Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-18T14:48:10.527955054Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-18T14:48:10.565709435Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-18T14:48:10.674838657Z level=info msg="Restored cache from database" duration=437.401µs grafana | logger=resource-migrator t=2025-06-18T14:48:10.685645084Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-18T14:48:10.685700606Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-18T14:48:10.696238818Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-18T14:48:10.697092808Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=853.78µs grafana | logger=resource-migrator t=2025-06-18T14:48:10.703640534Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-18T14:48:10.703683165Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=42.421µs grafana | logger=resource-migrator t=2025-06-18T14:48:10.7081101Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-18T14:48:10.708343456Z level=info msg="Migration successfully executed" id="drop table resource" duration=233.186µs grafana | logger=resource-migrator t=2025-06-18T14:48:10.713740035Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-18T14:48:10.714891202Z level=info msg="Migration successfully executed" id="create table resource" duration=1.147727ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.721742476Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-18T14:48:10.723025646Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.28273ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.727957043Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-18T14:48:10.728047186Z level=info msg="Migration 
successfully executed" id="drop table resource_history" duration=91.002µs grafana | logger=resource-migrator t=2025-06-18T14:48:10.731944669Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-18T14:48:10.733903076Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.957227ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.741886316Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-18T14:48:10.743749921Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.863435ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.748193046Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-18T14:48:10.749424566Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.23281ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.75295419Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-18T14:48:10.753055252Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=100.633µs grafana | logger=resource-migrator t=2025-06-18T14:48:10.756492834Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-18T14:48:10.757940959Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.441755ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.763301317Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-18T14:48:10.766265757Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=2.96471ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.772445464Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-18T14:48:10.772560787Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=115.293µs grafana | logger=resource-migrator t=2025-06-18T14:48:10.776143663Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-18T14:48:10.777430734Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.28587ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.784192544Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-18T14:48:10.786440758Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.248574ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.796804725Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-18T14:48:10.798117856Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.312661ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.802098001Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-18T14:48:10.813188336Z level=info msg="Migration successfully executed" id="Add column 
previous_resource_version in resource_history" duration=11.089484ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.81796576Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-18T14:48:10.828732666Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=10.766236ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.832035846Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-18T14:48:10.832957377Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=920.861µs grafana | logger=resource-migrator t=2025-06-18T14:48:10.836269707Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-18T14:48:10.837849384Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.577337ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.84271997Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-18T14:48:10.854717636Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=11.995106ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.858220839Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-18T14:48:10.868145826Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=9.923737ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.871450575Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-18T14:48:10.871475566Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-18T14:48:10.871954787Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=503.302µs grafana | logger=resource-migrator t=2025-06-18T14:48:10.875626295Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-18T14:48:10.87709386Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.467015ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.882260912Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-18T14:48:10.896515163Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=14.255091ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.899936114Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-18T14:48:10.900908308Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=971.414µs grafana | logger=resource-migrator t=2025-06-18T14:48:10.904242457Z level=info msg="migrations completed" performed=26 skipped=0 duration=208.058141ms grafana | logger=resource-migrator t=2025-06-18T14:48:10.904918433Z level=info msg="Unlocking database" grafana | t=2025-06-18T14:48:10.905144859Z level=info caller=logger.go:214 time=2025-06-18T14:48:10.905124738Z msg="Using 
channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-18T14:48:10.918090807Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-18T14:48:10.956260607Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-18T14:48:10.956286277Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-18T14:48:10.956354389Z level=info msg="Plugins loaded" count=53 duration=38.264662ms grafana | logger=query_data t=2025-06-18T14:48:10.961273817Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-18T14:48:10.971360417Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-18T14:48:10.988977518Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-18T14:48:10.998459343Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-18T14:48:10.998593177Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-18T14:48:11.002217813Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=plugin.backgroundinstaller t=2025-06-18T14:48:11.002753936Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=ngalert.state.manager t=2025-06-18T14:48:11.007494808Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.state.manager t=2025-06-18T14:48:11.009589868Z level=info msg="State cache has been initialized" states=0 duration=2.093219ms grafana | logger=provisioning.datasources t=2025-06-18T14:48:11.009789253Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=ngalert.multiorg.alertmanager t=2025-06-18T14:48:11.010059119Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=ngalert.scheduler t=2025-06-18T14:48:11.010115491Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-18T14:48:11.010266084Z level=info msg=starting first_tick=2025-06-18T14:48:20Z grafana | logger=http.server t=2025-06-18T14:48:11.010849097Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=grafanaStorageLogger t=2025-06-18T14:48:11.011196366Z level=info msg="Storage starting" grafana | logger=sqlstore.transactions t=2025-06-18T14:48:11.026603911Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=plugins.update.checker t=2025-06-18T14:48:11.107160271Z level=info msg="Update check succeeded" duration=104.375244ms grafana | logger=grafana.update.checker t=2025-06-18T14:48:11.141348533Z level=info msg="Update check succeeded" duration=138.762632ms grafana | logger=provisioning.alerting t=2025-06-18T14:48:11.18171141Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-18T14:48:11.181755351Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-18T14:48:11.185831657Z level=info msg="starting to provision dashboards" grafana | logger=sqlstore.transactions t=2025-06-18T14:48:11.213267718Z level=info msg="Database locked, 
sleeping then retrying" error="database is locked" retry=0 grafana | logger=sqlstore.transactions t=2025-06-18T14:48:11.224646458Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 grafana | logger=sqlstore.transactions t=2025-06-18T14:48:11.239400198Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-18T14:48:11.240519104Z level=info msg="Patterns update finished" duration=169.329106ms grafana | logger=grafana-apiserver t=2025-06-18T14:48:11.419304424Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T14:48:11.419992201Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T14:48:11.423271549Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T14:48:11.424262712Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T14:48:11.425133702Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T14:48:11.425976903Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T14:48:11.427587971Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T14:48:11.429185139Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T14:48:11.43093461Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-18T14:48:11.493996185Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-18T14:48:11.990548532Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=provisioning.dashboard t=2025-06-18T14:48:12.059968172Z level=info msg="finished to provision dashboards" grafana | logger=installer.fs t=2025-06-18T14:48:12.14300576Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-18T14:48:12.175729002Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-18T14:48:12.175751512Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=1.172940964s grafana | logger=plugin.backgroundinstaller t=2025-06-18T14:48:12.175804194Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-18T14:48:13.398066386Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-18T14:48:13.453529128Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-18T14:48:13.469101313Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-18T14:48:13.469154504Z 
level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=1.2933336s grafana | logger=plugin.backgroundinstaller t=2025-06-18T14:48:13.469230226Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=plugin.installer t=2025-06-18T14:48:14.025510485Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-18T14:48:14.090939882Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-18T14:48:14.107860547Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-18T14:48:14.107915178Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=638.655542ms grafana | logger=plugin.backgroundinstaller t=2025-06-18T14:48:14.10797252Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-18T14:48:14.534130956Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-18T14:48:14.591542016Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.2 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-18T14:48:14.609885184Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-18T14:48:14.609904754Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=501.900374ms grafana | logger=infra.usagestats t=2025-06-18T14:48:46.013323932Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2025-06-18 14:48:04,816] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 14:48:04,816] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 14:48:04,816] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 14:48:04,816] INFO Client environment:java.vendor=Azul Systems, Inc. 
kafka | [2025-06-18 14:48:04,816] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,816] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,816] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,816] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,816] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,816] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,816] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,817] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,817] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,817] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,817] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,817] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,817] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,817] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,820] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:04,823] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-18 14:48:04,827] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-18 14:48:04,834] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-18 14:48:04,856] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-18 14:48:04,857] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-18 14:48:04,869] INFO Socket connection established, initiating session, client: /172.17.0.7:42698, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-18 14:48:04,898] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000002c4ca0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-18 14:48:05,016] INFO Session: 0x1000002c4ca0000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:05,016] INFO EventThread shut down for session: 0x1000002c4ca0000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2025-06-18 14:48:05,738] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2025-06-18 14:48:06,022] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-18 14:48:06,129] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2025-06-18 14:48:06,130] INFO starting (kafka.server.KafkaServer)
kafka | [2025-06-18 14:48:06,131] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2025-06-18 14:48:06,149] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
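The "Check if Zookeeper is healthy" preflight above opens a short-lived session (sessionTimeout=40000) and closes it again. A manual equivalent, assuming nc is available and ZooKeeper's four-letter-word commands are whitelisted (4lw.commands.whitelist) on the zookeeper container:

$ echo ruok | nc zookeeper 2181
# a healthy server answers: imok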
kafka | [2025-06-18 14:48:06,155] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,155] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,155] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,155] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,155] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,155] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,156] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,156] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,156] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,156] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,156] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,156] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,157] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,157] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,157] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,157] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,157] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,157] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,160] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@584f54e6 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-18 14:48:06,164] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-18 14:48:06,170] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-18 14:48:06,172] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-18 14:48:06,176] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-18 14:48:06,184] INFO Socket connection established, initiating session, client: /172.17.0.7:42700, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-18 14:48:06,206] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000002c4ca0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-18 14:48:06,213] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
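With the broker's ZooKeeper session established (negotiated timeout = 18000), its registration can be verified from the kafka container using the zookeeper-shell tool bundled with Confluent images. A sketch; expect [1] only after the "Creating /brokers/ids/1" entry further down has completed:

$ docker exec kafka zookeeper-shell zookeeper:2181 ls /brokers/ids
# [1]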
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-18 14:48:06,578] INFO Cluster ID = TOiQfmCwSTSm8x2R5Lwn2Q (kafka.server.KafkaServer) kafka | [2025-06-18 14:48:06,583] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-18 14:48:06,639] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | 
log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | 
remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.manager.class.name = null
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = null
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = null
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2025-06-18 14:48:06,670] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-18 14:48:06,672] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-18 14:48:06,672] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-18 14:48:06,674] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-18 14:48:06,711] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2025-06-18 14:48:06,714] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2025-06-18 14:48:06,728] INFO Loaded 0 logs in 16ms. (kafka.log.LogManager)
kafka | [2025-06-18 14:48:06,728] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2025-06-18 14:48:06,730] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
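The dump above is the effective configuration of the broker this CSIT run uses: a single ZooKeeper-mode broker (zookeeper.connect = zookeeper:2181) with PLAINTEXT security only; every sasl.* and ssl.* credential field is null, so the test traffic is unauthenticated and unencrypted. As a rough sketch, a broker of this shape could be started with the Confluent images roughly like this (the 7.4.9 tag is inferred from the "Kafka version: 7.4.9-ccs" line further down; the network and container names are illustrative assumptions, not the job's actual compose file):

  # hypothetical stand-alone reproduction of the broker seen in this log
  $ docker network create csit-net
  $ docker run -d --name zookeeper --network csit-net \
      -e ZOOKEEPER_CLIENT_PORT=2181 \
      confluentinc/cp-zookeeper:7.4.9
  $ docker run -d --name kafka --network csit-net -p 29092:29092 \
      -e KAFKA_BROKER_ID=1 \
      -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 \
      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
      -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
      -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
      confluentinc/cp-kafka:7.4.9

The two advertised listeners (PLAINTEXT://kafka:9092 for in-network clients, PLAINTEXT_HOST://localhost:29092 for the host) match the endpoints the broker registers in ZooKeeper below, and a replication factor of 1 for the offsets topic matches the single-replica __consumer_offsets assignment created later in this log.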
kafka | [2025-06-18 14:48:06,741] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2025-06-18 14:48:06,787] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka | [2025-06-18 14:48:06,801] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2025-06-18 14:48:06,812] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-18 14:48:06,857] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-18 14:48:07,223] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-18 14:48:07,227] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-18 14:48:07,253] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2025-06-18 14:48:07,254] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-18 14:48:07,254] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-18 14:48:07,259] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2025-06-18 14:48:07,264] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-18 14:48:07,291] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-18 14:48:07,293] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-18 14:48:07,295] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-18 14:48:07,305] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-18 14:48:07,309] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2025-06-18 14:48:07,333] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-18 14:48:07,368] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750258087347,1750258087347,1,0,0,72057605929435137,258,0,27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-18 14:48:07,370] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-18 14:48:07,431] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-18 14:48:07,446] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-18 14:48:07,448] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-18 14:48:07,449] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-18 14:48:07,456] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-18 14:48:07,468] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:07,474] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,475] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:07,482] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,488] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-18 14:48:07,518] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,520] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2025-06-18 14:48:07,522] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-18 14:48:07,523] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,526] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,528] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
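At this point the broker has published itself as an ephemeral znode at /brokers/ids/1 and won the controller election with epoch 1. If one wanted to verify that state by hand, the zookeeper-shell tool shipped in the cp-kafka image can read both znodes directly (container and service names are the same assumptions as in the sketch above):

  # inspect the broker registration and the current controller in ZooKeeper
  $ docker exec kafka zookeeper-shell zookeeper:2181 get /brokers/ids/1
  $ docker exec kafka zookeeper-shell zookeeper:2181 get /controller

The first command should echo back JSON containing the advertised endpoints logged here (PLAINTEXT://kafka:9092 and PLAINTEXT_HOST://localhost:29092); the second should name broker 1 as the active controller, matching the "successfully elected as the controller" line.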
kafka | [2025-06-18 14:48:07,529] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,530] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-18 14:48:07,548] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,555] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,563] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-18 14:48:07,573] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-18 14:48:07,575] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,575] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,575] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,575] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,578] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,579] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,579] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,580] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-18 14:48:07,580] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:07,586] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-18 14:48:07,590] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-18 14:48:07,602] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-18 14:48:07,603] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-18 14:48:07,606] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-18 14:48:07,606] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-18 14:48:07,618] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-18 14:48:07,619] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-18 14:48:07,620] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes
(kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-18 14:48:07,623] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-18 14:48:07,624] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:07,630] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-18 14:48:07,636] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2025-06-18 14:48:07,641] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:07,641] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:07,641] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:07,642] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:07,643] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:07,656] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:07,670] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-18 14:48:07,670] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-18 14:48:07,670] INFO Kafka startTimeMs: 1750258087661 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-18 14:48:07,673] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-18 14:48:07,716] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-18 14:48:07,774] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-18 14:48:07,789] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-18 14:48:12,659] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:12,659] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:39,765] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-18 14:48:39,766] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> 
ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-18 14:48:39,827] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:39,877] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:39,905] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(BlDaxiBiSt2FMIySzyZgNA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(xCN-8w9fSoexV5DfEO7AnQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:39,907] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-18 14:48:39,909] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,909] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,909] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,909] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,909] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,909] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,909] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,910] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,910] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,910] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,910] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,910] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | 
[2025-06-18 14:48:39,910] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,910] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,910] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,911] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,911] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,911] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,911] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,911] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,911] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,911] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,911] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,912] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,912] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,912] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,912] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,912] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,912] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,912] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,912] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,912] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,912] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 14:48:39,914] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-18 14:48:39,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,922] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,923] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 14:48:39,924] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 
14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,251] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
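Every partition of policy-pdp-pap and __consumer_offsets has now moved from NewPartition to OnlinePartition with leader 1 and ISR [1], which is exactly what a one-broker cluster should produce. Once the become-leader LeaderAndIsr requests below are acknowledged, the topics are usable; a quick sanity check from inside the broker container might look like this (container name again assumed from the sketch above):

  # confirm the CSIT topic exists with a single online replica
  $ docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe --topic policy-pdp-pap

which should report PartitionCount: 1, ReplicationFactor: 1, and a partition line of the form "Partition: 0 Leader: 1 Replicas: 1 Isr: 1", mirroring the LeaderAndIsr state logged here.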
kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,252] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-18 14:48:40,255] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-18 14:48:40,255] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-18 14:48:40,256] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-18 14:48:40,257] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-18 14:48:40,258] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-18 14:48:40,262] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers 
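The summary line here is the key fact: the controller batches everything above into one LeaderAndIsr request with 51 become-leader and 0 become-follower partitions (the fifty __consumer_offsets partitions plus policy-pdp-pap-0), followed by an UpdateMetadata request to the brokers. On a one-broker cluster there is nothing to follow, so every partition becomes a leader. A quick external check of the resulting layout, as a sketch assuming the confluent-kafka Python client and a broker reachable on localhost:9092:

# Hedged sketch: confirm partition counts and leadership after creation.
# Assumes confluent-kafka is installed; localhost:9092 is an assumption.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "localhost:9092"})
md = admin.list_topics(timeout=10)
for name in ("policy-pdp-pap", "__consumer_offsets"):
    topic = md.topics[name]
    leaders = {p.leader for p in topic.partitions.values()}
    print(f"{name}: {len(topic.partitions)} partitions, leaders={leaders}")
# Expected here: 1 + 50 partitions, every leader == broker 1, matching the
# "51 become-leader and 0 become-follower" summary above.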
HashSet(1) for 51 partitions (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | 
[2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,264] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,265] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 14:48:40,266] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-18 14:48:40,272] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,280] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,281] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,282] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,282] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,282] TRACE [Broker id=1] Received LeaderAndIsr request 
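On the broker side, each of these Received LeaderAndIsr records carries correlation id 1 from controller 1 at epoch 1. The controller epoch is what fences stale controllers: a broker applies a LeaderAndIsr request only if its epoch is at least the highest epoch the broker has already seen. A toy illustration of that guard (hypothetical names; the real check sits inside Kafka's broker code, not here):

# Toy epoch-fencing sketch, not Kafka's actual implementation.
class Broker:
    def __init__(self, broker_id: int):
        self.broker_id = broker_id
        self.highest_controller_epoch = 0
        self.leaders: dict[tuple[str, int], int] = {}

    def handle_leader_and_isr(self, controller_epoch: int,
                              topic: str, partition: int, leader: int) -> bool:
        if controller_epoch < self.highest_controller_epoch:
            return False  # stale controller; request is ignored
        self.highest_controller_epoch = controller_epoch
        self.leaders[(topic, partition)] = leader
        return True

b = Broker(broker_id=1)
assert b.handle_leader_and_isr(1, "policy-pdp-pap", 0, leader=1)      # applied
assert not b.handle_leader_and_isr(0, "policy-pdp-pap", 0, leader=9)  # fenced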
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,282] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,282] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 14:48:40,320] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-18 14:48:40,320] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-18 14:48:40,320] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-18 14:48:40,320] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-18 14:48:40,320] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-18 14:48:40,320] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-18 14:48:40,321] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-18 14:48:40,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-18 14:48:40,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-18 14:48:40,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-18 14:48:40,324] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-18 14:48:40,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-18 14:48:40,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-18 14:48:40,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-18 14:48:40,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-18 14:48:40,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-18 14:48:40,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-18 14:48:40,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-18 14:48:40,326] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-18 14:48:40,327] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
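At this point the broker has stopped its replica fetchers for all 51 partitions it is becoming leader for: the 50 __consumer_offsets partitions (the broker default offsets.topic.num.partitions=50) plus policy-pdp-pap-0. A minimal sketch for verifying that assignment by hand, assuming the CSIT docker-compose stack is still up, the broker container is named kafka with a Confluent-style image that puts kafka-topics on PATH, and the client listener is on localhost:9092 (none of which this excerpt confirms):

# Illustrative only: on a single-broker cluster every partition should report Leader: 1, Isr: 1.
docker exec kafka kafka-topics --bootstrap-server localhost:9092 \
  --describe --topic policy-pdp-pap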
kafka | [2025-06-18 14:48:40,378] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:40,389] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:40,392] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,393] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,394] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:40,505] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:40,506] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:40,506] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,506] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,507] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:40,557] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:40,559] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:40,559] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,559] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,560] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) kafka | [2025-06-18 14:48:40,689] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:40,691] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:40,692] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,692] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,692] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:40,774] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:40,776] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:40,776] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,776] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,776] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:40,877] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:40,878] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:40,878] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,879] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:40,879] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:41,172] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,174] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,174] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,174] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,174] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,235] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,236] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,236] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,236] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,236] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,322] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,323] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,323] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,323] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,323] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:41,398] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,399] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,399] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,399] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,399] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,512] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,513] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,513] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,513] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,513] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,540] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,541] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,541] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,541] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,541] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:41,562] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,563] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,563] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,564] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,564] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,596] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,599] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,599] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,599] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,600] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,621] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,622] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,622] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,622] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,622] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:41,674] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,675] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,675] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,675] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,675] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,739] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,741] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,741] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,741] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,741] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,804] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,805] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,805] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,806] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,806] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:41,837] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,838] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,838] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,838] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,838] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,870] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,870] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,871] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,871] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,871] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,894] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,895] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,895] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,895] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,895] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:41,935] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,935] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,935] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,935] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,936] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:41,982] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:41,983] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:41,983] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,983] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:41,983] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:42,021] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,021] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,021] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,022] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,022] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:42,079] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,080] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,080] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,080] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,080] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:42,120] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,130] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,130] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,130] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,130] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:42,185] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,186] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,186] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,186] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,186] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:42,255] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,257] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,257] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,257] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,258] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:42,348] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,349] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,349] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,349] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,349] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:42,379] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,380] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,380] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,380] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,381] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:42,415] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,416] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,416] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,416] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,416] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:42,521] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,523] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,523] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,523] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,523] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:42,608] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,609] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,609] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,609] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,609] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:42,711] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,712] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,713] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,713] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,713] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:42,848] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,848] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,849] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,849] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,849] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(BlDaxiBiSt2FMIySzyZgNA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:42,916] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:42,917] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:42,917] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,917] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:42,917] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
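Note that policy-pdp-pap-0 is created with properties {} (pure broker defaults), while every __consumer_offsets partition carries the compacted-topic overrides {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}. The application topic was most likely auto-created on first use, since auto.create.topics.enable defaults to true; nothing in this excerpt shows an explicit create. A hedged equivalent of provisioning it up front, under the same container-name and listener assumptions as the earlier sketch:

# Illustrative only: create the topic explicitly instead of relying on auto-creation.
docker exec kafka kafka-topics --bootstrap-server localhost:9092 \
  --create --topic policy-pdp-pap --partitions 1 --replication-factor 1
# Confirm no per-topic overrides are set, matching the empty {} in the log:
docker exec kafka kafka-configs --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name policy-pdp-pap --describe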
kafka | [2025-06-18 14:48:43,051] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,052] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,052] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,053] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,053] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:43,073] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,074] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,074] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,074] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,075] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:43,135] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,137] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,137] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,137] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,137] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) kafka | [2025-06-18 14:48:43,226] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,228] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,228] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,228] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,228] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:43,279] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,280] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,280] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,280] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,281] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:43,292] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,293] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,293] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,293] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,293] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:43,391] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,393] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,393] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,394] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,394] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:43,470] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,471] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,471] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,471] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,471] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:43,500] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,501] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,502] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,502] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,502] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:43,542] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,545] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,545] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,545] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,545] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:43,631] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,632] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,633] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,633] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,633] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:43,646] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,647] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,647] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,648] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,648] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:43,700] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,701] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,702] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,702] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,702] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:43,766] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,767] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,767] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,767] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,767] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 14:48:43,806] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 14:48:43,807] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 14:48:43,807] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,807] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 14:48:43,807] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(xCN-8w9fSoexV5DfEO7AnQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 14:48:43,817] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-18 14:48:43,817] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-18 14:48:43,817] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-18 14:48:43,817] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-18 14:48:43,817] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-18 14:48:43,817] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-18 14:48:43,817] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-18 14:48:43,818] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-18 14:48:43,818] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-18 14:48:43,818] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-18 14:48:43,818] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-18 14:48:43,818] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-18 14:48:43,818] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-18 14:48:43,819] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-18 14:48:43,819] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-18 14:48:43,819] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-18 14:48:43,819] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-18 14:48:43,819] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-18 14:48:43,819] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-18 14:48:43,820] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-18 14:48:43,820] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-18 14:48:43,820] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-18 14:48:43,820] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-18 14:48:43,820] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-18 14:48:43,820] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-18 14:48:43,821] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-18 14:48:43,821] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-18 14:48:43,821] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-18 14:48:43,821] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-18 14:48:43,821] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-18 14:48:43,821] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-18 14:48:43,822] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-18 14:48:43,822] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-18 14:48:43,822] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-18 14:48:43,822] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-18 14:48:43,822] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-18 14:48:43,822] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-18 14:48:43,822] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-18 14:48:43,823] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-18 14:48:43,823] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-18 14:48:43,823] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-18 14:48:43,823] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-18 14:48:43,823] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-18 14:48:43,823] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-18 14:48:43,824] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-18 14:48:43,824] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-18 14:48:43,824] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-18 14:48:43,824] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-18 14:48:43,824] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger) kafka | [2025-06-18 14:48:43,824] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-18 14:48:43,825] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-18 14:48:43,829] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,830] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,831] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,831] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,832] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,832] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,832] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,832] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,832] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,832] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,832] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,832] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,832] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,832] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,832] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,833] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,833] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,833] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,833] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,833] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,833] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,833] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,833] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,833] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,833] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,833] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,834] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,834] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,834] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,834] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,834] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 
22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,838] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,838] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,839] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,839] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,839] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,839] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,839] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,839] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,839] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,839] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,839] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,839] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,839] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,839] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,840] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,840] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,840] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,840] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,840] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 14:48:43,840] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 9 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,848] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,848] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,848] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,848] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-18 14:48:43,848] INFO [Broker id=1] Finished LeaderAndIsr request in 3576ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-18 14:48:43,851] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=xCN-8w9fSoexV5DfEO7AnQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=BlDaxiBiSt2FMIySzyZgNA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-18 14:48:43,856] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,857] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,858] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 14:48:43,858] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-18 14:48:44,328] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-16847501-14d3-494a-be74-9a7be8b545a0 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:44,343] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-16847501-14d3-494a-be74-9a7be8b545a0 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-16847501-14d3-494a-be74-9a7be8b545a0) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:44,463] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 0c8432c9-f6c5-4d9a-960d-955a9a5fb422 in Empty state. Created a new member id consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2-09fd0c0d-8ca3-4609-8caa-03444e877900 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
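The block above is the tail of the controller's initial UpdateMetadata push: the broker caches leader/ISR state for the __consumer_offsets partitions (all led by broker 1, the only replica) and then reports 51 partitions added to its metadata cache. A client sees the result of this caching whenever it fetches topic metadata. A minimal sketch of such a check, assuming only the kafka:9092 broker visible in these logs (kafka-python is an illustrative client choice, not something this job uses):

from kafka import KafkaConsumer  # illustrative client library, not used by this CSIT job

consumer = KafkaConsumer(bootstrap_servers="kafka:9092")
# partitions_for_topic() is answered from the broker-side metadata cache
# populated by the UpdateMetadata requests logged above; for
# __consumer_offsets it should report partitions 0..49.
print(sorted(consumer.partitions_for_topic("__consumer_offsets")))
consumer.close()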
kafka | [2025-06-18 14:48:44,466] INFO [GroupCoordinator 1]: Preparing to rebalance group 0c8432c9-f6c5-4d9a-960d-955a9a5fb422 in state PreparingRebalance with old generation 0 (__consumer_offsets-12) (reason: Adding new member consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2-09fd0c0d-8ca3-4609-8caa-03444e877900 with group instance id None; client reason: need to re-join with the given member-id: consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2-09fd0c0d-8ca3-4609-8caa-03444e877900) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:44,525] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group bdbdbc11-1218-46be-b848-46f0c21e23d0 in Empty state. Created a new member id consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3-345fd7bf-af42-4aa9-9810-54d1e861a3ad and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:44,528] INFO [GroupCoordinator 1]: Preparing to rebalance group bdbdbc11-1218-46be-b848-46f0c21e23d0 in state PreparingRebalance with old generation 0 (__consumer_offsets-28) (reason: Adding new member consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3-345fd7bf-af42-4aa9-9810-54d1e861a3ad with group instance id None; client reason: need to re-join with the given member-id: consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3-345fd7bf-af42-4aa9-9810-54d1e861a3ad) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:47,356] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:47,378] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-16847501-14d3-494a-be74-9a7be8b545a0 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:47,468] INFO [GroupCoordinator 1]: Stabilized group 0c8432c9-f6c5-4d9a-960d-955a9a5fb422 generation 1 (__consumer_offsets-12) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:47,484] INFO [GroupCoordinator 1]: Assignment received from leader consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2-09fd0c0d-8ca3-4609-8caa-03444e877900 for group 0c8432c9-f6c5-4d9a-960d-955a9a5fb422 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:47,530] INFO [GroupCoordinator 1]: Stabilized group bdbdbc11-1218-46be-b848-46f0c21e23d0 generation 1 (__consumer_offsets-28) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:48:47,534] INFO [GroupCoordinator 1]: Assignment received from leader consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3-345fd7bf-af42-4aa9-9810-54d1e861a3ad for group bdbdbc11-1218-46be-b848-46f0c21e23d0 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
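The join/rebalance/stabilize sequence above is the standard consumer-group handshake: a member with an unknown id is handed a fresh member id and asked to rejoin, the coordinator moves the group to PreparingRebalance, and once the generation stabilizes the elected leader submits the partition assignment. A sketch of a client that drives exactly this sequence, assuming the same kafka:9092 broker (the topic and group id below are made up for illustration):

from kafka import KafkaConsumer

# Subscribing with a group_id drives the JoinGroup/SyncGroup handshake seen
# above: member-id assignment, PreparingRebalance, Stabilized, assignment.
consumer = KafkaConsumer(
    "policy-notification",        # hypothetical topic choice
    group_id="example-group",     # hypothetical group id
    bootstrap_servers="kafka:9092",
    consumer_timeout_ms=10000,    # stop iterating if nothing arrives
)
for record in consumer:
    print(record.topic, record.partition, record.offset)
# close() sends an explicit LeaveGroup, like the rdkafka member later in this log.
consumer.close()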
kafka | [2025-06-18 14:48:49,719] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-18 14:48:49,765] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(AvnN8l-WSpuldR0kbqIMcA),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:49,765] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController)
kafka | [2025-06-18 14:48:49,765] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-18 14:48:49,765] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-18 14:48:49,765] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-18 14:48:49,765] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-18 14:48:49,821] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-18 14:48:49,821] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-18 14:48:49,821] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2025-06-18 14:48:49,821] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
kafka | [2025-06-18 14:48:49,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-18 14:48:49,822] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-18 14:48:49,823] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-18 14:48:49,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-18 14:48:49,824] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-18 14:48:49,824] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager)
kafka | [2025-06-18 14:48:49,824] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
kafka | [2025-06-18 14:48:49,829] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-18 14:48:49,830] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-18 14:48:49,831] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition)
kafka | [2025-06-18 14:48:49,831] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-18 14:48:49,832] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(AvnN8l-WSpuldR0kbqIMcA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-18 14:48:49,967] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-18 14:48:49,969] INFO [Broker id=1] Finished LeaderAndIsr request in 146ms correlationId 3 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-18 14:48:49,970] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=AvnN8l-WSpuldR0kbqIMcA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-18 14:48:49,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-18 14:48:49,971] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-18 14:48:49,973] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-18 14:50:20,070] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-9bbac5e5-00aa-4335-adb6-e4b285202b16 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
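The policy-notification sequence above (from "Creating topic" through the correlation-id-4 metadata update) creates a single-partition, single-replica topic: the controller walks the partition from NonExistentPartition through NewPartition to OnlinePartition and makes broker 1 leader at epoch 0. The equivalent client-side request, sketched with kafka-python's admin client (illustrative only; it is not how the topic was created in this run):

from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="kafka:9092")
# One partition, replication factor 1 -- matching the assignment
# HashMap(0 -> ArrayBuffer(1)) logged above.
admin.create_topics([NewTopic(name="policy-notification",
                              num_partitions=1,
                              replication_factor=1)])
admin.close()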
kafka | [2025-06-18 14:50:20,071] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-9bbac5e5-00aa-4335-adb6-e4b285202b16 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:50:23,072] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:50:23,076] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-9bbac5e5-00aa-4335-adb6-e4b285202b16 for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:50:23,202] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-9bbac5e5-00aa-4335-adb6-e4b285202b16 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:50:23,202] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 14:50:23,204] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-9bbac5e5-00aa-4335-adb6-e4b285202b16, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.5, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator)
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.5:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api | . ____ _ __ _ _
policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-api | =========|_|==============|___/=/_/_/_/
policy-api |
policy-api | :: Spring Boot :: (v3.4.6)
policy-api |
policy-api | [2025-06-18T14:48:17.829+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-18T14:48:17.960+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 41 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-18T14:48:17.961+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-18T14:48:19.626+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-18T14:48:19.828+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 189 ms. Found 6 JPA repository interfaces.
policy-api | [2025-06-18T14:48:20.574+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-api | [2025-06-18T14:48:20.588+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-18T14:48:20.590+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2025-06-18T14:48:20.590+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-api | [2025-06-18T14:48:20.632+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2025-06-18T14:48:20.632+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2596 ms
policy-api | [2025-06-18T14:48:20.982+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2025-06-18T14:48:21.069+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-api | [2025-06-18T14:48:21.113+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2025-06-18T14:48:21.495+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2025-06-18T14:48:21.537+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2025-06-18T14:48:21.754+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1ab21633
policy-api | [2025-06-18T14:48:21.757+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2025-06-18T14:48:21.849+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-api | Database driver: undefined/unknown
policy-api | Database version: 16.4
policy-api | Autocommit mode: undefined/unknown
policy-api | Isolation level: undefined/unknown
policy-api | Minimum pool size: undefined/unknown
policy-api | Maximum pool size: undefined/unknown
policy-api | [2025-06-18T14:48:23.993+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2025-06-18T14:48:23.997+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2025-06-18T14:48:24.659+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2025-06-18T14:48:25.619+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2025-06-18T14:48:26.818+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-18T14:48:26.865+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-18T14:48:27.526+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-18T14:48:27.680+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-18T14:48:27.699+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-18T14:48:27.720+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.921 seconds (process running for 11.515)
policy-api | [2025-06-18T14:48:39.925+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-18T14:48:39.926+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-18T14:48:39.927+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
policy-api | [2025-06-18T14:49:55.560+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-6] ***** OrderedServiceImpl implementers:
policy-api | []
policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
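The first case in the run below, Healthcheck, probes the xacml-pdp health endpoint addressed by the POLICY_PDPX_IP variable above. A rough Python equivalent of that probe; the URL path and credentials are assumptions for illustration, not values taken from this log:

import requests

# Hypothetical healthcheck probe; path and credentials are placeholders.
resp = requests.get(
    "http://policy-xacml-pdp:6969/policy/pdpx/v1/healthcheck",
    auth=("user", "password"),  # placeholder credentials
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # a passing check reports the component as healthy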
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | MakeTopics :: Creates the Policy topics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ExecuteXacmlPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
policy-csit | 4 tests, 4 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
policy-csit | 6 tests, 6 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.4) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
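Before migrating, the container blocks until postgres:5432 accepts a TCP connection; the repeated nc "Connection refused" lines above are that retry loop. The same wait, sketched in Python (host and port are from the log; the retry interval is an assumption):

import socket
import time

# Retry until the TCP port accepts a connection, like the nc loop above.
while True:
    try:
        with socket.create_connection(("postgres", 5432), timeout=2):
            print("postgres 5432 reachable")
            break
    except OSError:
        time.sleep(1)  # assumed retry cadence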
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
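The migrator's bookkeeping is visible in the output above: schema_versions holds one name/version row per schema (policyadmin starts at 0), and policyadmin_schema_changelog gains a row per executed script. A sketch of inspecting both tables with psycopg2; the connection parameters are illustrative assumptions, while the table and column names are exactly those printed in this log:

import psycopg2

# Host, database and credentials are placeholders for illustration.
conn = psycopg2.connect(host="postgres", dbname="migration",
                        user="policy_user", password="password")
with conn, conn.cursor() as cur:
    cur.execute("SELECT name, version FROM schema_versions")
    print(cur.fetchall())   # e.g. [('policyadmin', 0)] before the upgrade
    cur.execute("SELECT script, operation, success FROM policyadmin_schema_changelog")
    print(cur.fetchall())   # (0 rows) before the 0 -> 1300 run
conn.close()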
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0470-pdp.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
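Each "> upgrade NNNN-*.sql / CREATE TABLE / INSERT 0 1 / rc=0" block above is one migration step: the script's DDL runs, a changelog row is recorded, and the return code is checked before moving on. A hypothetical reimplementation of that driver loop (the real migrator is a shell script; the SQL directory path and psql invocation details below are assumptions):

import glob
import subprocess

# Hypothetical per-script upgrade loop; the path below is a placeholder.
for script in sorted(glob.glob("/opt/app/policy/sql/*.sql")):
    print(f"> upgrade {script}")
    rc = subprocess.run(
        ["psql", "-h", "postgres", "-U", "policy_user",
         "-d", "policyadmin", "-f", script],
    ).returncode
    print(f"rc={rc}")
    if rc != 0:
        break  # stop the upgrade at the first failing script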
policy-db-migrator |
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdp.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0210-sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0220-sequence.sql
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-db-migrator | msg
policy-db-migrator | ---------------------------
policy-db-migrator | upgrade to 1100 completed
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | DROP INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-audit_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:02.715668 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:02.764629 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:02.808292 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:02.852056 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:02.901486 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:02.947518 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.012488 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.064194 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.108899 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 
1806251448020800u | 1 | 2025-06-18 14:48:03.158508 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.208823 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.263308 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.316087 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.35768 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.424878 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.4859 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.541435 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.586436 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.638145 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.691448 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.74224 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.846659 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.886603 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.935164 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:03.983639 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.059411 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.107911 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.160137 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.215352 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.269307 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.322867 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.377128 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.427837 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.476657 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.542044 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 
0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.662791 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.718172 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.775528 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.847057 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.910141 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:04.960524 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.04018 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.091512 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.137783 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.195521 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.305835 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.355242 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.418198 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.474084 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.531096 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.591823 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.643035 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.693624 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.747434 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.852671 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.90881 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:05.989555 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.047321 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.103263 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.163118 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.254071 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.317278 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 
2025-06-18 14:48:06.372283 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.432247 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.48585 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.546762 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.632918 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.714513 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.776748 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.839977 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.904431 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:06.996003 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.051438 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.101065 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.150437 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.207975 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.261195 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.317372 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.414318 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.47055 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.526019 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.614697 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.656056 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.705248 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.78519 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.838194 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:07.895338 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 
14:48:07.952082 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:08.010007 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:08.064474 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:08.285632 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:08.341395 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:08.395855 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:08.446798 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:08.503112 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1806251448020800u | 1 | 2025-06-18 14:48:08.563668 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:08.615896 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:08.701243 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:08.753169 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:08.806001 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:08.894859 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:08.952243 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:09.021156 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:09.123988 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:09.176578 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:09.236754 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:09.295642 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:09.351635 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1806251448020900u | 1 | 2025-06-18 14:48:09.432153 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1806251448021000u | 1 | 2025-06-18 14:48:09.535383 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1806251448021000u | 1 | 2025-06-18 14:48:09.613891 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1806251448021000u | 1 | 2025-06-18 14:48:09.724824 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1806251448021000u | 1 | 2025-06-18 14:48:09.788115 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1806251448021000u 
| 1 | 2025-06-18 14:48:09.837303 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1806251448021000u | 1 | 2025-06-18 14:48:09.889934 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1806251448021000u | 1 | 2025-06-18 14:48:09.9616 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1806251448021000u | 1 | 2025-06-18 14:48:10.051185 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1806251448021000u | 1 | 2025-06-18 14:48:10.107703 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1806251448021100u | 1 | 2025-06-18 14:48:10.163375 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1806251448021200u | 1 | 2025-06-18 14:48:10.216667 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1806251448021200u | 1 | 2025-06-18 14:48:10.278396 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1806251448021200u | 1 | 2025-06-18 14:48:10.353499 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1806251448021200u | 1 | 2025-06-18 14:48:10.456299 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1806251448021300u | 1 | 2025-06-18 14:48:10.520284 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1806251448021300u | 1 | 2025-06-18 14:48:10.599752 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1806251448021300u | 1 | 2025-06-18 14:48:10.658739 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0200-automationcompositiondefinition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | 
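The "clampacm: upgrade available: 0 -> 1701" and "upgrade: 0 -> 1701" lines earlier in this phase come from comparing the version recorded in the schema_versions table (printed as the "name | version" dumps) against the highest release the migrator ships scripts for (the "Preparing upgrade release version: 1400 ... 1701" lines). A minimal sketch of that check, assuming a schema_versions(name, version) table as shown in the log and hypothetical connection credentials:

    # Sketch: decide whether a schema needs upgrading, based on the
    # schema_versions table printed in the log ("name | version").
    import psycopg2  # assumption: psycopg2 client; the real migrator shells out to psql

    def pending_upgrade(conn, schema: str, available: list[int]) -> tuple[int, int] | None:
        with conn.cursor() as cur:
            cur.execute("SELECT version FROM schema_versions WHERE name = %s", (schema,))
            row = cur.fetchone()
            current = int(row[0]) if row else 0
        target = max(available)      # e.g. 1701 for clampacm
        if current < target:
            return current, target   # logged as "upgrade available: 0 -> 1701"
        return None

    conn = psycopg2.connect(dbname="clampacm", user="policy_user")  # hypothetical creds
    print(pending_upgrade(conn, "clampacm", [1400, 1500, 1600, 1601, 1700, 1701]))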
policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 
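Several of the 1701-series scripts just above (e.g. 0200-automationcomposition.sql) emit a run of "UPDATE 0" lines followed by a single "ALTER TABLE": the script backfills any rows that would violate a new constraint before tightening the schema, and "UPDATE 0" simply means no rows needed fixing on this freshly created CSIT database. A generic sketch of that backfill-then-constrain idiom, with hypothetical table/column names:

    # Sketch of the idiom behind the "UPDATE 0 ... ALTER TABLE" sequences;
    # the column name is hypothetical, the pattern is generic PostgreSQL.
    import psycopg2

    conn = psycopg2.connect(dbname="clampacm", user="policy_user")  # hypothetical creds
    with conn, conn.cursor() as cur:
        # Backfill NULLs so the NOT NULL constraint below cannot fail;
        # on an empty database this reports "UPDATE 0".
        cur.execute("UPDATE automationcomposition SET phase = 0 WHERE phase IS NULL")
        cur.execute("ALTER TABLE automationcomposition ALTER COLUMN phase SET NOT NULL")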
policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:11.38131 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:11.462237 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:11.522177 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:11.576194 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:11.64406 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:11.705772 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:11.790689 policy-db-migrator | 8 | 
0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:11.850043 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:11.940218 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:11.993449 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:12.04803 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:12.098752 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1806251448111400u | 1 | 2025-06-18 14:48:12.154776 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1806251448111500u | 1 | 2025-06-18 14:48:12.24823 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1806251448111500u | 1 | 2025-06-18 14:48:12.304301 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1806251448111500u | 1 | 2025-06-18 14:48:12.414348 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1806251448111500u | 1 | 2025-06-18 14:48:12.469753 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1806251448111500u | 1 | 2025-06-18 14:48:12.528198 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1806251448111500u | 1 | 2025-06-18 14:48:12.627562 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1806251448111500u | 1 | 2025-06-18 14:48:12.682039 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1806251448111500u | 1 | 2025-06-18 14:48:12.731834 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1806251448111600u | 1 | 2025-06-18 14:48:12.789061 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1806251448111600u | 1 | 2025-06-18 14:48:12.842772 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1806251448111601u | 1 | 2025-06-18 14:48:12.89802 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1806251448111601u | 1 | 2025-06-18 14:48:12.950296 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1806251448111700u | 1 | 2025-06-18 14:48:13.028641 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1806251448111700u | 1 | 2025-06-18 14:48:13.085536 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1806251448111700u | 1 | 2025-06-18 14:48:13.140253 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1806251448111701u | 1 | 2025-06-18 14:48:13.200703 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1806251448111701u | 1 | 2025-06-18 14:48:13.259515 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1806251448111701u | 1 | 2025-06-18 14:48:13.331078 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1806251448111701u | 1 | 2025-06-18 14:48:13.384557 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1806251448111701u | 1 | 2025-06-18 14:48:13.445469 policy-db-migrator | 34 | 
0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1806251448111701u | 1 | 2025-06-18 14:48:13.495513 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1806251448111701u | 1 | 2025-06-18 14:48:13.562784 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1806251448111701u | 1 | 2025-06-18 14:48:13.620189 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1806251448111701u | 1 | 2025-06-18 14:48:13.667242 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | 
| | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1806251448141600u | 1 | 2025-06-18 14:48:14.395994 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 
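The entire pooling schema initialized above is the single 0100-distributed.locking.sql script: one table plus two indexes (CREATE TABLE, CREATE INDEX, CREATE INDEX) that PDP instances use as a database-backed distributed lock. The log does not show the table's columns, so the sketch below only illustrates the general pattern such a script enables (lock rows claimed atomically via an upsert with an expiry); the schema and names are hypothetical.

    # Sketch of a DB-backed distributed lock in the spirit of
    # 0100-distributed.locking.sql; real columns are not shown in the log,
    # so this schema is hypothetical.
    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS locks (
        resource   varchar(128) PRIMARY KEY,
        owner      varchar(128) NOT NULL,
        expires_at timestamptz  NOT NULL
    )
    """

    ACQUIRE = """
    INSERT INTO locks (resource, owner, expires_at)
    VALUES (%s, %s, now() + interval '30 seconds')
    ON CONFLICT (resource) DO UPDATE
        SET owner = EXCLUDED.owner, expires_at = EXCLUDED.expires_at
        WHERE locks.expires_at < now()   -- steal only expired locks
    RETURNING owner
    """

    def try_lock(conn, resource: str, owner: str) -> bool:
        with conn, conn.cursor() as cur:
            cur.execute(ACQUIRE, (resource, owner))
            row = cur.fetchone()  # no row => lock held by a live owner
            return row is not None and row[0] == owner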
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------------+---------
policy-db-migrator | operationshistory | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | operationshistory: upgrade available: 0 -> 1600
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1600
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-operationshistory.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | operationshistory: OK: upgrade (1600)
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator | name | version
policy-db-migrator | -------------------+---------
policy-db-migrator | operationshistory | 1600
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1806251448151600u | 1 | 2025-06-18 14:48:15.13196
policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1806251448151600u | 1 | 2025-06-18 14:48:15.200837
policy-db-migrator | (2 rows)
policy-db-migrator |
policy-db-migrator | operationshistory: OK @ 1600
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.6:6969) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.7:9092) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap | [Spring Boot ASCII-art startup banner]
policy-pap |
policy-pap | :: Spring Boot :: (v3.4.6)
policy-pap |
policy-pap | [2025-06-18T14:48:30.140+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 63 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2025-06-18T14:48:30.142+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default"
policy-pap | [2025-06-18T14:48:31.650+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2025-06-18T14:48:31.745+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 81 ms. Found 7 JPA repository interfaces.
policy-pap | [2025-06-18T14:48:32.701+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-pap | [2025-06-18T14:48:32.713+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2025-06-18T14:48:32.715+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2025-06-18T14:48:32.715+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-pap | [2025-06-18T14:48:32.770+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2025-06-18T14:48:32.771+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2564 ms
policy-pap | [2025-06-18T14:48:33.196+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-pap | [2025-06-18T14:48:33.275+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-pap | [2025-06-18T14:48:33.321+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-pap | [2025-06-18T14:48:33.717+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-pap | [2025-06-18T14:48:33.763+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-pap | [2025-06-18T14:48:33.979+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@296bfddb
policy-pap | [2025-06-18T14:48:33.981+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
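The migrator's final check above reads a name/version pair back out of its bookkeeping tables (operationshistory @ 1600), and PAP then opens its own PostgreSQL connections through HikariCP. Below is a minimal Java sketch of that kind of verification. It assumes schema_versions is the table behind the name/version output (the NOTICE lines name it, but the log does not show the query), and the JDBC host, port, and password are placeholders; only the database names and the policy_user owner come from the listing above.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class MigrationCheck {
        public static void main(String[] args) throws Exception {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:postgresql://postgres:5432/operationshistory"); // assumed host/port
            cfg.setUsername("policy_user");  // owner shown in the database listing
            cfg.setPassword("<password>");   // placeholder, not in the log
            cfg.setMaximumPoolSize(2);
            try (HikariDataSource ds = new HikariDataSource(cfg);
                 Connection c = ds.getConnection();
                 Statement st = c.createStatement();
                 // Assumed source of the migrator's "name | version" output
                 ResultSet rs = st.executeQuery("SELECT name, version FROM schema_versions")) {
                while (rs.next()) {
                    // After a successful run the log shows: operationshistory | 1600
                    System.out.println(rs.getString("name") + " @ " + rs.getInt("version"));
                }
            }
        }
    }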
policy-pap | [2025-06-18T14:48:34.076+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-18T14:48:36.088+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-18T14:48:36.092+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-18T14:48:37.362+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = bdbdbc11-1218-46be-b848-46f0c21e23d0 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 
300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-18T14:48:37.418+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T14:48:37.566+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T14:48:37.566+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T14:48:37.566+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750258117565 policy-pap | [2025-06-18T14:48:37.568+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-1, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-18T14:48:37.569+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms 
= 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location 
= null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-18T14:48:37.570+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T14:48:37.577+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T14:48:37.577+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T14:48:37.577+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750258117577 policy-pap | [2025-06-18T14:48:37.577+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-18T14:48:37.929+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-18T14:48:38.065+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-18T14:48:38.145+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-18T14:48:38.372+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
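The ConsumerConfig dumps above show the effective client settings PAP uses: bootstrap.servers = [kafka:9092], StringDeserializer for both key and value, auto.offset.reset = latest, group.id = policy-pap, and a subscription to the policy-pdp-pap topic. A minimal stand-alone Java sketch of an equivalent consumer follows, for illustration only; PAP itself does not call the client directly but wraps it in policy-common's SingleThreadedKafkaTopicSource, as the ServiceManager lines further down show.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapListener {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // from the dump
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");          // from the dump
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");     // from the dump
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap")); // topic from the log
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                    for (ConsumerRecord<String, String> r : records) {
                        // e.g. a PDP_STATUS or PDP_UPDATE JSON payload, as seen later in the log
                        System.out.println(r.value());
                    }
                }
            }
        }
    }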
policy-pap | [2025-06-18T14:48:39.079+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-18T14:48:39.201+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-18T14:48:39.221+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-18T14:48:39.248+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-18T14:48:39.249+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-18T14:48:39.250+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-18T14:48:39.250+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-18T14:48:39.251+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-18T14:48:39.251+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-18T14:48:39.251+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-18T14:48:39.253+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bdbdbc11-1218-46be-b848-46f0c21e23d0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@29fccabd policy-pap | [2025-06-18T14:48:39.265+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bdbdbc11-1218-46be-b848-46f0c21e23d0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-18T14:48:39.265+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = bdbdbc11-1218-46be-b848-46f0c21e23d0 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | 
group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null 
policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-18T14:48:39.266+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T14:48:39.275+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T14:48:39.275+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T14:48:39.276+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750258119275 policy-pap | [2025-06-18T14:48:39.276+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-18T14:48:39.277+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-18T14:48:39.277+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=5f143312-c62c-4339-b811-8b14c704942c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1205d231 policy-pap | [2025-06-18T14:48:39.277+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=5f143312-c62c-4339-b811-8b14c704942c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-18T14:48:39.278+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 
policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | 
ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-18T14:48:39.278+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T14:48:39.286+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T14:48:39.287+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T14:48:39.287+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750258119286 policy-pap | [2025-06-18T14:48:39.287+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-18T14:48:39.288+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-18T14:48:39.288+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=5f143312-c62c-4339-b811-8b14c704942c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-18T14:48:39.288+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bdbdbc11-1218-46be-b848-46f0c21e23d0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-18T14:48:39.288+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8160121e-caea-40e0-82a2-cc4fe9811d08, alive=false, publisher=null]]: starting policy-pap | [2025-06-18T14:48:39.301+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | 
max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 
policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-18T14:48:39.302+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T14:48:39.316+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-18T14:48:39.334+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T14:48:39.334+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T14:48:39.334+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750258119333 policy-pap | [2025-06-18T14:48:39.334+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8160121e-caea-40e0-82a2-cc4fe9811d08, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-18T14:48:39.335+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0db2a225-9942-4c85-81ae-ad62510dc47a, alive=false, publisher=null]]: starting policy-pap | [2025-06-18T14:48:39.335+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | 
sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-18T14:48:39.336+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T14:48:39.336+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
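Both producer-1 and producer-2 above are created with idempotence enabled, acks = -1, and String serializers, publishing to policy-pdp-pap. The following is a minimal equivalent in plain Java, again illustrative only: PAP publishes through policy-common's InlineKafkaTopicSink rather than a bare client, and the payload string here is an abbreviated stand-in for the PDP_UPDATE JSON that appears later in the log.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // from the dump
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);        // from the dump
            props.put(ProducerConfig.ACKS_CONFIG, "all");                     // "all" == acks = -1 in the dump
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String msg = "{\"messageName\":\"PDP_UPDATE\", ...}"; // abbreviated stand-in payload
                producer.send(new ProducerRecord<>("policy-pdp-pap", msg), (meta, ex) -> {
                    if (ex != null) ex.printStackTrace(); // delivery failed after retries
                });
                producer.flush();
            }
        }
    }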
policy-pap | [2025-06-18T14:48:39.342+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T14:48:39.342+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T14:48:39.343+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750258119342 policy-pap | [2025-06-18T14:48:39.344+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0db2a225-9942-4c85-81ae-ad62510dc47a, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-18T14:48:39.344+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-18T14:48:39.344+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-18T14:48:39.346+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-18T14:48:39.349+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-18T14:48:39.351+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-18T14:48:39.351+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-18T14:48:39.351+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-18T14:48:39.351+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-18T14:48:39.352+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-18T14:48:39.353+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-18T14:48:39.353+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.041 seconds (process running for 10.629) policy-pap | [2025-06-18T14:48:39.378+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-18T14:48:39.754+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: TOiQfmCwSTSm8x2R5Lwn2Q policy-pap | [2025-06-18T14:48:39.754+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: TOiQfmCwSTSm8x2R5Lwn2Q policy-pap | [2025-06-18T14:48:39.755+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-18T14:48:39.755+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: TOiQfmCwSTSm8x2R5Lwn2Q policy-pap | [2025-06-18T14:48:39.880+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-18T14:48:39.884+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:39.885+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, 
groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Cluster ID: TOiQfmCwSTSm8x2R5Lwn2Q policy-pap | [2025-06-18T14:48:39.888+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-18T14:48:39.888+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-18T14:48:40.062+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:40.184+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:40.269+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:40.610+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:40.633+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:41.367+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:41.482+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] The metadata response from the cluster reported a recoverable issue with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:41.604+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-18T14:48:41.604+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-18T14:48:41.607+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 3 ms policy-pap | [2025-06-18T14:48:42.311+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:42.494+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] The metadata response 
from the cluster reported a recoverable issue with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:43.287+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:43.502+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] The metadata response from the cluster reported a recoverable issue with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:44.301+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-18T14:48:44.307+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-18T14:48:44.334+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-16847501-14d3-494a-be74-9a7be8b545a0 policy-pap | [2025-06-18T14:48:44.334+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-18T14:48:44.516+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-18T14:48:44.519+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] (Re-)joining group policy-pap | [2025-06-18T14:48:44.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Request joining group due to: need to re-join with the given member-id: consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3-345fd7bf-af42-4aa9-9810-54d1e861a3ad policy-pap | [2025-06-18T14:48:44.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] (Re-)joining group policy-pap | [2025-06-18T14:48:47.358+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-16847501-14d3-494a-be74-9a7be8b545a0', protocol='range'} policy-pap | [2025-06-18T14:48:47.366+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-16847501-14d3-494a-be74-9a7be8b545a0=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-18T14:48:47.393+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in 
generation Generation{generationId=1, memberId='consumer-policy-pap-4-16847501-14d3-494a-be74-9a7be8b545a0', protocol='range'} policy-pap | [2025-06-18T14:48:47.394+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-18T14:48:47.399+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-18T14:48:47.415+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-18T14:48:47.431+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-18T14:48:47.531+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Successfully joined group with generation Generation{generationId=1, memberId='consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3-345fd7bf-af42-4aa9-9810-54d1e861a3ad', protocol='range'} policy-pap | [2025-06-18T14:48:47.531+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Finished assignment for group at generation 1: {consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3-345fd7bf-af42-4aa9-9810-54d1e861a3ad=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-18T14:48:47.537+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Successfully synced group in generation Generation{generationId=1, memberId='consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3-345fd7bf-af42-4aa9-9810-54d1e861a3ad', protocol='range'} policy-pap | [2025-06-18T14:48:47.538+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-18T14:48:47.538+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-18T14:48:47.540+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-18T14:48:47.541+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bdbdbc11-1218-46be-b848-46f0c21e23d0-3, groupId=bdbdbc11-1218-46be-b848-46f0c21e23d0] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-18T14:48:48.680+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2025-06-18T14:48:48.681+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"8f9aed24-103f-4980-b9f4-298246c6d4f1","timestampMs":1750258121272,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85"} policy-pap | [2025-06-18T14:48:48.686+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"8f9aed24-103f-4980-b9f4-298246c6d4f1","timestampMs":1750258121272,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85"} policy-pap | [2025-06-18T14:48:48.689+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK policy-pap | [2025-06-18T14:48:48.689+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_TOPIC_CHECK policy-pap | [2025-06-18T14:48:48.716+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"0663db10-5b6a-4baa-839c-e5a2973a7e9b","timestampMs":1750258128690,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup"} policy-pap | [2025-06-18T14:48:48.717+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"0663db10-5b6a-4baa-839c-e5a2973a7e9b","timestampMs":1750258128690,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup"} policy-pap | [2025-06-18T14:48:48.722+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-18T14:48:49.498+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting policy-pap | [2025-06-18T14:48:49.498+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting listener policy-pap | [2025-06-18T14:48:49.499+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting timer policy-pap | [2025-06-18T14:48:49.500+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=51825e25-c0a4-446b-ab79-8d23eda8e4d9, expireMs=1750258159500] policy-pap | [2025-06-18T14:48:49.502+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting enqueue policy-pap | [2025-06-18T14:48:49.503+00:00|INFO|TimerManager|Thread-9] update timer waiting 29997ms Timer [name=51825e25-c0a4-446b-ab79-8d23eda8e4d9, expireMs=1750258159500] policy-pap | [2025-06-18T14:48:49.503+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate started policy-pap | [2025-06-18T14:48:49.510+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
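
The run-up above is normal first-start behaviour for a Kafka consumer group: LEADER_NOT_AVAILABLE is retried while the broker auto-creates policy-pdp-pap, the group coordinator hands out a member id, the range assignor gives the topic's single partition to one member, and the offset is reset because the brand-new group has no committed position. A minimal standalone consumer that walks through the same lifecycle (the group id and topic are taken from the log; everything else is illustrative):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker as logged
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");          // heartbeat consumer's group
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");     // why FetchPosition is reset
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap")); // triggers join, assignment, offset reset
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    records.forEach(r -> System.out.printf("[IN|KAFKA|%s] %s%n", r.topic(), r.value()));
                }
            }
        }
    }

Run against a fresh single-broker cluster, this prints the same join/assignment INFO lines seen above before any records arrive.
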
{"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"51825e25-c0a4-446b-ab79-8d23eda8e4d9","timestampMs":1750258129429,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:49.552+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"51825e25-c0a4-446b-ab79-8d23eda8e4d9","timestampMs":1750258129429,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:49.553+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T14:48:49.554+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"51825e25-c0a4-446b-ab79-8d23eda8e4d9","timestampMs":1750258129429,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:49.555+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T14:48:49.684+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"51825e25-c0a4-446b-ab79-8d23eda8e4d9","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"f73f8103-b599-432a-a35a-7971507febc2","timestampMs":1750258129672,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:49.685+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 51825e25-c0a4-446b-ab79-8d23eda8e4d9 policy-pap | [2025-06-18T14:48:49.689+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"51825e25-c0a4-446b-ab79-8d23eda8e4d9","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"f73f8103-b599-432a-a35a-7971507febc2","timestampMs":1750258129672,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:49.690+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping policy-pap | [2025-06-18T14:48:49.691+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping enqueue policy-pap | [2025-06-18T14:48:49.691+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping timer policy-pap | [2025-06-18T14:48:49.692+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=51825e25-c0a4-446b-ab79-8d23eda8e4d9, expireMs=1750258159500] policy-pap | [2025-06-18T14:48:49.692+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping listener policy-pap | [2025-06-18T14:48:49.692+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopped policy-pap | [2025-06-18T14:48:49.699+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"a48970dd-f262-4a36-ac56-c888ffba5a30","timestampMs":1750258129684,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:49.710+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate successful policy-pap | [2025-06-18T14:48:49.710+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 start publishing next request policy-pap | [2025-06-18T14:48:49.710+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange starting policy-pap | [2025-06-18T14:48:49.710+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange starting listener policy-pap | [2025-06-18T14:48:49.710+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange starting timer policy-pap | [2025-06-18T14:48:49.710+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.Naming","policy-type-version":"1.0.0","policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-18T14:48:49.710+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=07e7d842-2cd2-467d-b05a-cf0d92ed0835, expireMs=1750258159710] policy-pap | [2025-06-18T14:48:49.711+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange starting enqueue policy-pap | [2025-06-18T14:48:49.711+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=07e7d842-2cd2-467d-b05a-cf0d92ed0835, expireMs=1750258159710] policy-pap | [2025-06-18T14:48:49.711+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange started policy-pap | [2025-06-18T14:48:49.712+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"07e7d842-2cd2-467d-b05a-cf0d92ed0835","timestampMs":1750258129430,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | 
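
Each outbound PdpUpdate or PdpStateChange registers a 30-second timer (the expireMs values above are the publish timestamp plus 30000 ms) that is cancelled when a PDP_STATUS arrives whose response.responseTo matches the request id; the background TimerManager thread later logs already-served entries as "discarded (expired)" when their deadline passes. A minimal sketch of that bookkeeping, assuming the 30 s window seen in the log:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    public class UpdateTimerSketch {
        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        private final Map<String, ScheduledFuture<?>> timers = new ConcurrentHashMap<>();

        public void registerTimer(String requestId, Runnable onExpiry) {
            timers.put(requestId, scheduler.schedule(() -> {
                timers.remove(requestId);
                onExpiry.run();                 // e.g. republish or mark the PDP unresponsive
            }, 30_000, TimeUnit.MILLISECONDS));
        }

        public void onResponse(String responseTo) {  // PDP_STATUS carries response.responseTo
            ScheduledFuture<?> timer = timers.remove(responseTo);
            if (timer != null) {
                timer.cancel(false);            // mirrors "update timer cancelled Timer [name=...]"
            }
        }
    }
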
[2025-06-18T14:48:49.762+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:49.875+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 8 : {policy-notification=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T14:48:50.123+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"a48970dd-f262-4a36-ac56-c888ffba5a30","timestampMs":1750258129684,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:50.124+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-18T14:48:50.128+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"07e7d842-2cd2-467d-b05a-cf0d92ed0835","timestampMs":1750258129430,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:50.129+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-18T14:48:50.129+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"07e7d842-2cd2-467d-b05a-cf0d92ed0835","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"36c73ea5-dc92-47a2-bf70-8acfecccf90f","timestampMs":1750258129735,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange stopping policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange stopping enqueue policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange stopping timer policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=07e7d842-2cd2-467d-b05a-cf0d92ed0835, expireMs=1750258159710] policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange stopping listener policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange stopped policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpStateChange successful policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 start publishing next request policy-pap | 
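
The producer-side warnings for policy-notification are the same recoverable condition seen earlier on the consumer side: the first publish triggers auto-creation of the topic, and the client simply refreshes metadata until a leader is elected. One way to keep such warnings out of CSIT logs is to create the topics before the stack starts; a hedged AdminClient sketch (partition and replication counts assume the single-broker test setup, and topic names are the three seen in this log):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class TopicPrecreateSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                admin.createTopics(List.of(
                        new NewTopic("policy-pdp-pap", 1, (short) 1),
                        new NewTopic("policy-heartbeat", 1, (short) 1),
                        new NewTopic("policy-notification", 1, (short) 1)))
                    .all().get();  // block until every topic has an elected leader
            }
        }
    }
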
[2025-06-18T14:48:50.408+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting listener policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting timer policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=3801915a-f127-47c5-b352-a498d9389f4a, expireMs=1750258160408] policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting enqueue policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate started policy-pap | [2025-06-18T14:48:50.408+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"3801915a-f127-47c5-b352-a498d9389f4a","timestampMs":1750258130109,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:50.414+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"07e7d842-2cd2-467d-b05a-cf0d92ed0835","timestampMs":1750258129430,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:50.415+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-18T14:48:50.418+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"07e7d842-2cd2-467d-b05a-cf0d92ed0835","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"36c73ea5-dc92-47a2-bf70-8acfecccf90f","timestampMs":1750258129735,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:50.419+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 07e7d842-2cd2-467d-b05a-cf0d92ed0835 policy-pap | [2025-06-18T14:48:50.426+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"3801915a-f127-47c5-b352-a498d9389f4a","timestampMs":1750258130109,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:50.428+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T14:48:50.442+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"3801915a-f127-47c5-b352-a498d9389f4a","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"1a755ab2-772f-476d-894f-8e9ee6c5abb0","timestampMs":1750258130427,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:50.442+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"3801915a-f127-47c5-b352-a498d9389f4a","timestampMs":1750258130109,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:50.442+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 3801915a-f127-47c5-b352-a498d9389f4a policy-pap | [2025-06-18T14:48:50.443+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T14:48:50.449+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"3801915a-f127-47c5-b352-a498d9389f4a","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"1a755ab2-772f-476d-894f-8e9ee6c5abb0","timestampMs":1750258130427,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:48:50.449+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping policy-pap | [2025-06-18T14:48:50.449+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping enqueue policy-pap | [2025-06-18T14:48:50.449+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping timer policy-pap | [2025-06-18T14:48:50.449+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=3801915a-f127-47c5-b352-a498d9389f4a, expireMs=1750258160408] policy-pap | [2025-06-18T14:48:50.449+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping listener policy-pap | [2025-06-18T14:48:50.449+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopped policy-pap | [2025-06-18T14:48:50.456+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate successful policy-pap | [2025-06-18T14:48:50.456+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 has no more requests policy-pap | [2025-06-18T14:49:19.500+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=51825e25-c0a4-446b-ab79-8d23eda8e4d9, expireMs=1750258159500] policy-pap | [2025-06-18T14:49:19.710+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=07e7d842-2cd2-467d-b05a-cf0d92ed0835, expireMs=1750258159710] policy-pap | [2025-06-18T14:49:58.903+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup policy-pap | 
[2025-06-18T14:49:58.905+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-5] add policy onap.restart.tca 1.0.0 to subgroup defaultGroup xacml count=2 policy-pap | [2025-06-18T14:49:58.905+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering a deploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-18T14:49:58.906+00:00|INFO|SessionData|http-nio-6969-exec-5] add update xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 defaultGroup xacml policies=1 policy-pap | [2025-06-18T14:49:58.907+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group defaultGroup policy-pap | [2025-06-18T14:49:58.953+00:00|INFO|SessionData|http-nio-6969-exec-5] use cached group defaultGroup policy-pap | [2025-06-18T14:49:58.953+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-5] add policy OSDF_CASABLANCA.Affinity_Default 1.0.0 to subgroup defaultGroup xacml count=3 policy-pap | [2025-06-18T14:49:58.953+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering a deploy for policy OSDF_CASABLANCA.Affinity_Default 1.0.0 policy-pap | [2025-06-18T14:49:58.953+00:00|INFO|SessionData|http-nio-6969-exec-5] add update xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 defaultGroup xacml policies=2 policy-pap | [2025-06-18T14:49:58.953+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group defaultGroup policy-pap | [2025-06-18T14:49:58.953+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group defaultGroup policy-pap | [2025-06-18T14:49:58.987+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-18T14:49:58Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=OSDF_CASABLANCA.Affinity_Default 1.0.0, action=DEPLOYMENT, timestamp=2025-06-18T14:49:58Z, user=policyadmin)] policy-pap | [2025-06-18T14:49:59.027+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting policy-pap | [2025-06-18T14:49:59.027+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting listener policy-pap | [2025-06-18T14:49:59.027+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting timer policy-pap | [2025-06-18T14:49:59.027+00:00|INFO|TimerManager|http-nio-6969-exec-5] update timer registered Timer [name=be26d6c0-daac-401d-9515-730726e32bcc, expireMs=1750258229027] policy-pap | [2025-06-18T14:49:59.027+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting enqueue policy-pap | [2025-06-18T14:49:59.027+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate started policy-pap | [2025-06-18T14:49:59.027+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=be26d6c0-daac-401d-9515-730726e32bcc, expireMs=1750258229027] policy-pap | [2025-06-18T14:49:59.028+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"be26d6c0-daac-401d-9515-730726e32bcc","timestampMs":1750258198953,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:49:59.037+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"be26d6c0-daac-401d-9515-730726e32bcc","timestampMs":1750258198953,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:49:59.039+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T14:49:59.046+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"be26d6c0-daac-401d-9515-730726e32bcc","timestampMs":1750258198953,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:49:59.046+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T14:49:59.612+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"be26d6c0-daac-401d-9515-730726e32bcc","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"77085c87-cdf1-4096-af1b-fdea37eebc95","timestampMs":1750258199606,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:49:59.614+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping policy-pap | [2025-06-18T14:49:59.614+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping enqueue policy-pap | [2025-06-18T14:49:59.614+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping timer policy-pap | [2025-06-18T14:49:59.614+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=be26d6c0-daac-401d-9515-730726e32bcc, expireMs=1750258229027] policy-pap | [2025-06-18T14:49:59.614+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping listener policy-pap | [2025-06-18T14:49:59.614+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopped policy-pap | 
[2025-06-18T14:49:59.617+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"be26d6c0-daac-401d-9515-730726e32bcc","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"77085c87-cdf1-4096-af1b-fdea37eebc95","timestampMs":1750258199606,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:49:59.617+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id be26d6c0-daac-401d-9515-730726e32bcc policy-pap | [2025-06-18T14:49:59.634+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate successful policy-pap | [2025-06-18T14:49:59.634+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 has no more requests policy-pap | [2025-06-18T14:49:59.635+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0},{"policy-type":"onap.policies.optimization.resource.AffinityPolicy","policy-type-version":"1.0.0","policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-18T14:50:23.698+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group defaultGroup policy-pap | [2025-06-18T14:50:23.699+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-3] remove policy onap.restart.tca 1.0.0 from subgroup defaultGroup xacml count=2 policy-pap | [2025-06-18T14:50:23.699+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-18T14:50:23.699+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 defaultGroup xacml policies=0 policy-pap | [2025-06-18T14:50:23.699+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup policy-pap | [2025-06-18T14:50:23.699+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group defaultGroup policy-pap | [2025-06-18T14:50:23.726+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-18T14:50:23Z, user=policyadmin)] policy-pap | [2025-06-18T14:50:23.736+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting policy-pap | [2025-06-18T14:50:23.736+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting listener policy-pap | [2025-06-18T14:50:23.736+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting timer policy-pap | [2025-06-18T14:50:23.737+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer [name=559d67e5-82e0-4cf4-8cb0-c97fb23bfe92, expireMs=1750258253737] policy-pap | 
[2025-06-18T14:50:23.737+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate starting enqueue policy-pap | [2025-06-18T14:50:23.737+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate started policy-pap | [2025-06-18T14:50:23.737+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"559d67e5-82e0-4cf4-8cb0-c97fb23bfe92","timestampMs":1750258223699,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:50:23.744+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"559d67e5-82e0-4cf4-8cb0-c97fb23bfe92","timestampMs":1750258223699,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:50:23.744+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T14:50:23.746+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"559d67e5-82e0-4cf4-8cb0-c97fb23bfe92","timestampMs":1750258223699,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:50:23.746+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T14:50:23.757+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"559d67e5-82e0-4cf4-8cb0-c97fb23bfe92","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"14712725-63cd-49d0-a53d-c488a21ded08","timestampMs":1750258223750,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:50:23.757+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 559d67e5-82e0-4cf4-8cb0-c97fb23bfe92 policy-pap | [2025-06-18T14:50:23.761+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"559d67e5-82e0-4cf4-8cb0-c97fb23bfe92","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"14712725-63cd-49d0-a53d-c488a21ded08","timestampMs":1750258223750,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:50:23.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping policy-pap | [2025-06-18T14:50:23.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping enqueue policy-pap | [2025-06-18T14:50:23.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping timer policy-pap | [2025-06-18T14:50:23.762+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=559d67e5-82e0-4cf4-8cb0-c97fb23bfe92, expireMs=1750258253737] policy-pap | [2025-06-18T14:50:23.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopping listener policy-pap | [2025-06-18T14:50:23.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate stopped policy-pap | [2025-06-18T14:50:23.775+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 PdpUpdate successful policy-pap | [2025-06-18T14:50:23.775+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85 has no more requests policy-pap | [2025-06-18T14:50:23.775+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-18T14:50:29.027+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=be26d6c0-daac-401d-9515-730726e32bcc, expireMs=1750258229027] policy-pap | [2025-06-18T14:50:39.353+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-18T14:50:49.707+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"99420b24-4d92-41be-9d27-c64abab4bdab","timestampMs":1750258249698,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-18T14:50:49.708+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"99420b24-4d92-41be-9d27-c64abab4bdab","timestampMs":1750258249698,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | 
[2025-06-18T14:50:49.708+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-xacml-pdp | Waiting for pap port 6969... policy-xacml-pdp | pap (172.17.0.8:6969) open policy-xacml-pdp | Waiting for kafka port 9092... policy-xacml-pdp | kafka (172.17.0.7:9092) open policy-xacml-pdp | + KEYSTORE=/opt/app/policy/pdpx/etc/ssl/policy-keystore policy-xacml-pdp | + TRUSTSTORE=/opt/app/policy/pdpx/etc/ssl/policy-truststore policy-xacml-pdp | + KEYSTORE_PASSWD=Pol1cy_0nap policy-xacml-pdp | + TRUSTSTORE_PASSWD=Pol1cy_0nap policy-xacml-pdp | + '[' 0 -ge 1 ] policy-xacml-pdp | + CONFIG_FILE= policy-xacml-pdp | + '[' -z ] policy-xacml-pdp | + CONFIG_FILE=/opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-truststore ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-keystore ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/xacml.properties ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/logback.xml ] policy-xacml-pdp | + echo 'Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json' policy-xacml-pdp | + /usr/lib/jvm/default-jvm/bin/java -cp '/opt/app/policy/pdpx/etc:/opt/app/policy/pdpx/lib/*' '-Dlogback.configurationFile=/opt/app/policy/pdpx/etc/logback.xml' '-Djavax.net.ssl.keyStore=/opt/app/policy/pdpx/etc/ssl/policy-keystore' '-Djavax.net.ssl.keyStorePassword=Pol1cy_0nap' '-Djavax.net.ssl.trustStore=/opt/app/policy/pdpx/etc/ssl/policy-truststore' '-Djavax.net.ssl.trustStorePassword=Pol1cy_0nap' org.onap.policy.pdpx.main.startstop.Main -c /opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | [2025-06-18T14:48:40.505+00:00|INFO|Main|main] Starting policy xacml pdp service with arguments - [-c, /opt/app/policy/pdpx/etc/defaultConfig.json] policy-xacml-pdp | [2025-06-18T14:48:40.599+00:00|INFO|XacmlPdpActivator|main] Activator initializing using org.onap.policy.pdpx.main.parameters.XacmlPdpParameterGroup@37858383 policy-xacml-pdp | [2025-06-18T14:48:40.651+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-xacml-pdp | allow.auto.create.topics = true policy-xacml-pdp | auto.commit.interval.ms = 5000 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | auto.offset.reset = latest policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | check.crcs = true policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-1 policy-xacml-pdp | client.rack = policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | default.api.timeout.ms = 60000 policy-xacml-pdp | enable.auto.commit = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | exclude.internal.topics = true policy-xacml-pdp | fetch.max.bytes = 52428800 policy-xacml-pdp | fetch.max.wait.ms = 500 policy-xacml-pdp | fetch.min.bytes = 1 policy-xacml-pdp | group.id = 0c8432c9-f6c5-4d9a-960d-955a9a5fb422 policy-xacml-pdp | group.instance.id = null policy-xacml-pdp | group.protocol = classic policy-xacml-pdp | group.remote.assignor = null policy-xacml-pdp | heartbeat.interval.ms = 3000 policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | internal.leave.group.on.close = true policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-xacml-pdp | isolation.level = read_uncommitted policy-xacml-pdp | 
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | max.partition.fetch.bytes = 1048576 policy-xacml-pdp | max.poll.interval.ms = 300000 policy-xacml-pdp | max.poll.records = 500 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-xacml-pdp | receive.buffer.bytes = 65536 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | session.timeout.ms = 45000 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = 
JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | policy-xacml-pdp | [2025-06-18T14:48:40.689+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp | [2025-06-18T14:48:40.847+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-18T14:48:40.847+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-18T14:48:40.847+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750258120845 policy-xacml-pdp | [2025-06-18T14:48:40.850+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-1, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Subscribed to topic(s): policy-pdp-pap policy-xacml-pdp | [2025-06-18T14:48:40.914+00:00|INFO|XacmlPdpApplicationManager|main] Initialization applications org.onap.policy.pdpx.main.parameters.XacmlApplicationParameters@7ec3394b JerseyClient(name=policyApiParameters, https=false, selfSignedCerts=false, hostname=policy-api, port=6969, basePath=null, userName=policyadmin, password=zb!XztG34, client=org.glassfish.jersey.client.JerseyClient@698122b2, baseUrl=http://policy-api:6969/, alive=true) policy-xacml-pdp | [2025-06-18T14:48:40.925+00:00|INFO|XacmlPdpApplicationManager|main] Application guard supports [onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0] policy-xacml-pdp | [2025-06-18T14:48:40.926+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath guard at this path /opt/app/policy/pdpx/apps/guard policy-xacml-pdp | [2025-06-18T14:48:40.926+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/guard policy-xacml-pdp | [2025-06-18T14:48:40.926+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/guard/xacml.properties policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, 
xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.persistenceunit -> OperationsHistoryPU policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.name -> GetOperationOutcome policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.description -> Returns operation outcome policy-xacml-pdp | [2025-06-18T14:48:40.927+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.description -> Returns operation counts based on time window policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.password -> policy_user policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.issuer -> urn:org:onap:xacml:guard:get-operation-outcome policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.persistenceunit -> OperationsHistoryPU policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.driver -> org.postgresql.Driver policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.name -> 
CountRecentOperations policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.url -> jdbc:postgresql://postgres:5432/operationshistory policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.user -> policy_user policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.issuer -> urn:org:onap:xacml:guard:count-recent-operations policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] xacml.pip.engines -> count-recent-operations,get-operation-outcome policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip policy-xacml-pdp | [2025-06-18T14:48:40.928+00:00|INFO|StdXacmlApplicationServiceProvider|main] {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | 
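Aside: the long "key -> value" block above is simply the PDP echoing the guard application's xacml.properties after loading it. The xacml.pip.engines entry names the two PIP engines (count-recent-operations, get-operation-outcome), each prefix supplies its own .classname, .issuer and .persistenceunit keys, and the jakarta.persistence.jdbc.* entries point the OperationsHistoryPU unit at PostgreSQL. A minimal sketch of that load-and-echo step using only java.util.Properties (the class name and printing are illustrative assumptions, not the actual XacmlPolicyUtils code):

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Properties;

    public final class XacmlPropertiesEcho {
        public static void main(String[] args) throws IOException {
            // Path as it appears in the log; point this at any local copy to experiment.
            Path propsFile = Path.of("/opt/app/policy/pdpx/apps/guard/xacml.properties");
            Properties props = new Properties();
            try (InputStream in = Files.newInputStream(propsFile)) {
                props.load(in);
            }
            // Echo each entry the way the log lines above do: "key -> value".
            props.stringPropertyNames().stream().sorted()
                 .forEach(k -> System.out.println(k + " -> " + props.getProperty(k)));
        }
    }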
[2025-06-18T14:48:40.930+00:00|WARN|XACMLProperties|main] Properties file /usr/lib/jvm/java-17-openjdk/lib/xacml.properties cannot be read. policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|XacmlPdpApplicationManager|main] Application optimization supports [onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0] policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath optimization at this path /opt/app/policy/pdpx/apps/optimization policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/optimization policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/optimization/xacml.properties policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-18T14:48:40.960+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-18T14:48:40.961+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-18T14:48:40.961+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> 
org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.961+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.961+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.961+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.961+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-18T14:48:40.961+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-18T14:48:40.962+00:00|INFO|XacmlPdpApplicationManager|main] Application naming supports [onap.policies.Naming 1.0.0] policy-xacml-pdp | [2025-06-18T14:48:40.962+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath naming at this path /opt/app/policy/pdpx/apps/naming policy-xacml-pdp | [2025-06-18T14:48:40.962+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/naming policy-xacml-pdp | [2025-06-18T14:48:40.962+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/naming/xacml.properties policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 
policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.963+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-18T14:48:40.964+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPdpApplicationManager|main] Application native supports [onap.policies.native.Xacml 1.0.0, onap.policies.native.ToscaXacml 1.0.0] policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath native at this path /opt/app/policy/pdpx/apps/native policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/native policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/native/xacml.properties policy-xacml-pdp | 
[2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.966+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.967+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.967+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-18T14:48:40.967+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, 
xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPdpApplicationManager|main] Application match supports [onap.policies.Match 1.0.0] policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath match at this path /opt/app/policy/pdpx/apps/match policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/match policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/match/xacml.properties policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-18T14:48:40.968+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.969+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | 
[2025-06-18T14:48:40.969+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.969+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.969+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-18T14:48:40.969+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-18T14:48:40.970+00:00|INFO|XacmlPdpApplicationManager|main] Application monitoring supports [onap.Monitoring 1.0.0] policy-xacml-pdp | [2025-06-18T14:48:40.970+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath monitoring at this path /opt/app/policy/pdpx/apps/monitoring policy-xacml-pdp | [2025-06-18T14:48:40.970+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/monitoring policy-xacml-pdp | [2025-06-18T14:48:40.970+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-18T14:48:40.970+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | 
[2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-18T14:48:40.971+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-18T14:48:40.972+00:00|INFO|XacmlPdpApplicationManager|main] Finished applications initialization {optimize=org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplication@2b95e48b, native=org.onap.policy.xacml.pdp.application.nativ.NativePdpApplication@4a3329b9, guard=org.onap.policy.xacml.pdp.application.guard.GuardPdpApplication@3dddefd8, naming=org.onap.policy.xacml.pdp.application.naming.NamingPdpApplication@160ac7fb, match=org.onap.policy.xacml.pdp.application.match.MatchPdpApplication@12bfd80d, configure=org.onap.policy.xacml.pdp.application.monitoring.MonitoringPdpApplication@41925502} policy-xacml-pdp | [2025-06-18T14:48:40.994+00:00|INFO|XacmlPdpHearbeatPublisher|main] heartbeat topic probe 4000ms policy-xacml-pdp | [2025-06-18T14:48:41.193+00:00|INFO|ServiceManager|main] service manager starting policy-xacml-pdp | 
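Aside: "heartbeat topic probe 4000ms" is the interval at which the heartbeat publisher re-probes the policy-pdp-pap topic before regular heartbeats begin. A rough sketch of that kind of fixed-rate publication with a ScheduledExecutorService (illustrative only; the real XacmlPdpHearbeatPublisher also tracks PDP state and the PAP-supplied pdpHeartbeatIntervalMs seen later in this log):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public final class HeartbeatProbeSketch {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            long probeMs = 4000L; // matches the "heartbeat topic probe 4000ms" line above
            scheduler.scheduleAtFixedRate(
                () -> System.out.println("publish PDP heartbeat / topic probe"),
                0L, probeMs, TimeUnit.MILLISECONDS);
        }
    }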
[2025-06-18T14:48:41.193+00:00|INFO|ServiceManager|main] service manager starting XACML PDP parameters policy-xacml-pdp | [2025-06-18T14:48:41.194+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-xacml-pdp | [2025-06-18T14:48:41.194+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0c8432c9-f6c5-4d9a-960d-955a9a5fb422, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f574cc2 policy-xacml-pdp | [2025-06-18T14:48:41.211+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0c8432c9-f6c5-4d9a-960d-955a9a5fb422, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-xacml-pdp | [2025-06-18T14:48:41.212+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-xacml-pdp | allow.auto.create.topics = true policy-xacml-pdp | auto.commit.interval.ms = 5000 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | auto.offset.reset = latest policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | check.crcs = true policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2 policy-xacml-pdp | client.rack = policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | default.api.timeout.ms = 60000 policy-xacml-pdp | enable.auto.commit = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | exclude.internal.topics = true policy-xacml-pdp | fetch.max.bytes = 52428800 policy-xacml-pdp | fetch.max.wait.ms = 500 policy-xacml-pdp | fetch.min.bytes = 1 policy-xacml-pdp | group.id = 0c8432c9-f6c5-4d9a-960d-955a9a5fb422 policy-xacml-pdp | group.instance.id = null policy-xacml-pdp | group.protocol = classic policy-xacml-pdp | group.remote.assignor = null policy-xacml-pdp | heartbeat.interval.ms = 3000 policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | internal.leave.group.on.close = true policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-xacml-pdp | isolation.level = read_uncommitted policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | max.partition.fetch.bytes = 1048576 policy-xacml-pdp | max.poll.interval.ms = 300000 policy-xacml-pdp | max.poll.records = 500 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO 
policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-xacml-pdp | receive.buffer.bytes = 65536 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | session.timeout.ms = 45000 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | value.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | policy-xacml-pdp | [2025-06-18T14:48:41.212+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp | [2025-06-18T14:48:41.225+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-18T14:48:41.225+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-18T14:48:41.225+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750258121225 policy-xacml-pdp | [2025-06-18T14:48:41.225+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Subscribed to topic(s): policy-pdp-pap policy-xacml-pdp | [2025-06-18T14:48:41.226+00:00|INFO|ServiceManager|main] service manager starting topics policy-xacml-pdp | [2025-06-18T14:48:41.226+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0c8432c9-f6c5-4d9a-960d-955a9a5fb422, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-xacml-pdp | [2025-06-18T14:48:41.226+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=7cabd894-0712-4333-8b40-278259690aaf, alive=false, publisher=null]]: starting policy-xacml-pdp | [2025-06-18T14:48:41.238+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-xacml-pdp | acks = -1 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | batch.size = 16384 policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | buffer.memory = 33554432 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = producer-1 policy-xacml-pdp | compression.gzip.level = -1 policy-xacml-pdp | compression.lz4.level = 9 policy-xacml-pdp | compression.type = none policy-xacml-pdp | compression.zstd.level = 3 policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | delivery.timeout.ms = 120000 policy-xacml-pdp | enable.idempotence = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-xacml-pdp | linger.ms = 0 policy-xacml-pdp | max.block.ms = 60000 policy-xacml-pdp | max.in.flight.requests.per.connection = 5 policy-xacml-pdp | max.request.size = 1048576 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metadata.max.idle.ms = 300000 policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partitioner.adaptive.partitioning.enable = true policy-xacml-pdp | partitioner.availability.timeout.ms = 0 policy-xacml-pdp | partitioner.class = null policy-xacml-pdp | partitioner.ignore.keys = false policy-xacml-pdp | receive.buffer.bytes = 32768 
policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retries = 2147483647 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | transaction.timeout.ms = 60000 policy-xacml-pdp | transactional.id = null policy-xacml-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-xacml-pdp | policy-xacml-pdp | [2025-06-18T14:48:41.239+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp 
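Aside: the ProducerConfig dump above shows the Kafka sink publishing with enable.idempotence = true, acks = -1 (i.e. "all") and effectively unlimited retries, which is what lets the broker assign the ProducerId logged a few lines below. A hedged sketch of an equivalent producer built with the public Kafka client API (topic and payload taken from this log; everything else is a minimal assumption, not the ONAP wrapper code):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public final class IdempotentSinkSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // as in the dump above
            props.put(ProducerConfig.ACKS_CONFIG, "all");                // the dump shows acks = -1
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The PDP publishes JSON strings such as the PDP_TOPIC_CHECK message seen below.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_TOPIC_CHECK\"}"));
            }
        }
    }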
| [2025-06-18T14:48:41.249+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-xacml-pdp | [2025-06-18T14:48:41.269+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-18T14:48:41.269+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-18T14:48:41.269+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750258121269 policy-xacml-pdp | [2025-06-18T14:48:41.270+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=7cabd894-0712-4333-8b40-278259690aaf, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-xacml-pdp | [2025-06-18T14:48:41.270+00:00|INFO|ServiceManager|main] service manager starting Terminate PDP policy-xacml-pdp | [2025-06-18T14:48:41.270+00:00|INFO|ServiceManager|main] service manager starting Heartbeat Publisher policy-xacml-pdp | [2025-06-18T14:48:41.270+00:00|INFO|ServiceManager|main] service manager starting REST Server policy-xacml-pdp | [2025-06-18T14:48:41.270+00:00|INFO|ServiceManager|main] service manager starting policy-xacml-pdp | [2025-06-18T14:48:41.270+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-xacml-pdp | [2025-06-18T14:48:41.279+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0c8432c9-f6c5-4d9a-960d-955a9a5fb422, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: registering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007f785e2ad270@1d761c8f policy-xacml-pdp | [2025-06-18T14:48:41.279+00:00|INFO|SingleThreadedBusTopicSource|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0c8432c9-f6c5-4d9a-960d-955a9a5fb422, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=2]]]]: register: start not attempted policy-xacml-pdp | [2025-06-18T14:48:41.270+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, 
sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING policy-xacml-pdp | [2025-06-18T14:48:41.281+00:00|INFO|ServiceManager|main] service manager started policy-xacml-pdp | [2025-06-18T14:48:41.281+00:00|INFO|ServiceManager|main] service manager started policy-xacml-pdp | [2025-06-18T14:48:41.281+00:00|INFO|Main|main] Started policy-xacml-pdp service successfully. policy-xacml-pdp | [2025-06-18T14:48:41.284+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers: policy-xacml-pdp | [] policy-xacml-pdp | [2025-06-18T14:48:41.286+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"8f9aed24-103f-4980-b9f4-298246c6d4f1","timestampMs":1750258121272,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85"} policy-xacml-pdp | [2025-06-18T14:48:41.281+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-xacml-pdp | [2025-06-18T14:48:41.789+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] The metadata response from the cluster reported a recoverable issue with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-xacml-pdp | [2025-06-18T14:48:41.789+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] The metadata response from the cluster reported a recoverable issue 
with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2025-06-18T14:48:41.790+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: TOiQfmCwSTSm8x2R5Lwn2Q policy-xacml-pdp | [2025-06-18T14:48:41.790+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Cluster ID: TOiQfmCwSTSm8x2R5Lwn2Q policy-xacml-pdp | [2025-06-18T14:48:41.791+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-xacml-pdp | [2025-06-18T14:48:41.903+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2025-06-18T14:48:41.916+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2025-06-18T14:48:42.053+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-xacml-pdp | [2025-06-18T14:48:42.054+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-xacml-pdp | [2025-06-18T14:48:42.126+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2025-06-18T14:48:42.155+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] The metadata response from the cluster reported a recoverable issue with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2025-06-18T14:48:42.583+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2025-06-18T14:48:42.592+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2025-06-18T14:48:43.519+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] The metadata response from the cluster reported a recoverable issue with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2025-06-18T14:48:43.558+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] The metadata response from the cluster reported a recoverable issue with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-xacml-pdp | [2025-06-18T14:48:44.444+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-xacml-pdp | [2025-06-18T14:48:44.450+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] (Re-)joining group policy-xacml-pdp | [2025-06-18T14:48:44.464+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Request joining group due to: need to re-join with the given member-id: consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2-09fd0c0d-8ca3-4609-8caa-03444e877900 policy-xacml-pdp | [2025-06-18T14:48:44.464+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] (Re-)joining group policy-xacml-pdp | [2025-06-18T14:48:47.470+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Successfully joined group with generation Generation{generationId=1, memberId='consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2-09fd0c0d-8ca3-4609-8caa-03444e877900', protocol='range'} policy-xacml-pdp | [2025-06-18T14:48:47.479+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Finished assignment for group at generation 1: {consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2-09fd0c0d-8ca3-4609-8caa-03444e877900=Assignment(partitions=[policy-pdp-pap-0])} policy-xacml-pdp | [2025-06-18T14:48:47.488+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Successfully synced group in generation Generation{generationId=1, memberId='consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2-09fd0c0d-8ca3-4609-8caa-03444e877900', protocol='range'} policy-xacml-pdp | [2025-06-18T14:48:47.488+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-xacml-pdp | [2025-06-18T14:48:47.490+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Adding newly assigned partitions: policy-pdp-pap-0 policy-xacml-pdp | [2025-06-18T14:48:47.496+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Found no committed offset for partition policy-pdp-pap-0 policy-xacml-pdp | [2025-06-18T14:48:47.502+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-0c8432c9-f6c5-4d9a-960d-955a9a5fb422-2, groupId=0c8432c9-f6c5-4d9a-960d-955a9a5fb422] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
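Aside: the sequence above (discover the group coordinator, (re-)join, sync, "Adding newly assigned partitions", then reset the offset because auto.offset.reset = latest and no committed offset exists) is the standard classic-protocol consumer-group handshake; the first poll() drives it, and the rebalance listener is where the assignment becomes visible. A minimal sketch assuming only the settings shown in the ConsumerConfig dump (group id shortened here; the PDP uses a generated UUID):

    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public final class GroupJoinSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // why the offset resets to the log end
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        System.out.println("Adding newly assigned partitions: " + parts);
                    }
                    @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                        System.out.println("Revoked partitions: " + parts);
                    }
                });
                consumer.poll(Duration.ofSeconds(15)); // first poll triggers the join/sync flow logged above
            }
        }
    }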
policy-xacml-pdp | [2025-06-18T14:48:48.640+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"8f9aed24-103f-4980-b9f4-298246c6d4f1","timestampMs":1750258121272,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85"} policy-xacml-pdp | [2025-06-18T14:48:48.677+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"8f9aed24-103f-4980-b9f4-298246c6d4f1","timestampMs":1750258121272,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85"} policy-xacml-pdp | [2025-06-18T14:48:48.681+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK policy-xacml-pdp | [2025-06-18T14:48:48.681+00:00|INFO|BidirectionalTopicClient|KAFKA-source-policy-pdp-pap] topic policy-pdp-pap is ready; found matching message PdpTopicCheck(super=PdpMessage(messageName=PDP_TOPIC_CHECK, requestId=8f9aed24-103f-4980-b9f4-298246c6d4f1, timestampMs=1750258121272, name=xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85, pdpGroup=null, pdpSubgroup=null)) policy-xacml-pdp | [2025-06-18T14:48:48.689+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=0c8432c9-f6c5-4d9a-960d-955a9a5fb422, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=1, locked=false, #topicListeners=2]]]]: unregistering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007f785e2ad270@1d761c8f policy-xacml-pdp | [2025-06-18T14:48:48.692+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=0663db10-5b6a-4baa-839c-e5a2973a7e9b, timestampMs=1750258128690, name=xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85, pdpGroup=defaultGroup, pdpSubgroup=null), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-18T14:48:48.702+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"0663db10-5b6a-4baa-839c-e5a2973a7e9b","timestampMs":1750258128690,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup"} policy-xacml-pdp | [2025-06-18T14:48:48.716+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"0663db10-5b6a-4baa-839c-e5a2973a7e9b","timestampMs":1750258128690,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup"} policy-xacml-pdp | [2025-06-18T14:48:48.717+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-18T14:48:49.550+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"51825e25-c0a4-446b-ab79-8d23eda8e4d9","timestampMs":1750258129429,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:49.557+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=51825e25-c0a4-446b-ab79-8d23eda8e4d9, timestampMs=1750258129429, name=xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-4225fdb5-7079-4522-827b-e59cf4ff76ca, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.Naming, typeVersion=1.0.0, properties={policy-instance-name=ONAP_NF_NAMING_TIMESTAMP, naming-models=[{naming-type=VNF, naming-recipe=AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP, name-operation=to_lower_case(), naming-properties=[{property-name=AIC_CLOUD_REGION}, {property-name=CONSTANT, property-value=onap-nf}, {property-name=TIMESTAMP}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VNFC, naming-recipe=VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=ENTIRETY, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}, {property-name=NFC_NAMING_CODE}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VF-MODULE, naming-recipe=VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-value=-, property-name=DELIMITER}, {property-name=VF_MODULE_LABEL}, {property-name=VF_MODULE_TYPE}, {property-name=SEQUENCE, 
increment-sequence={max=zzz, scope=PRECEEDING, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}]}]}))], policiesToBeUndeployed=[]) policy-xacml-pdp | [2025-06-18T14:48:49.564+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP type: onap.policies.Naming weight: null policy: policy-xacml-pdp | {"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}} policy-xacml-pdp | [2025-06-18T14:48:49.655+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is policy-xacml-pdp | (generated XACML PolicySet XML elided: the console capture stripped the XML markup, leaving only scattered text nodes. Recoverable content: policy id SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy type onap.policies.Naming, version 1.0.0, rule description "Default is to PERMIT if the policy matches.", and an obligation carrying the ToscaPolicy JSON shown above.) policy-xacml-pdp | [2025-06-18T14:48:49.662+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/naming/xacml.properties policy-xacml-pdp | [2025-06-18T14:48:49.671+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy-version=1.0.0} into application naming policy-xacml-pdp | [2025-06-18T14:48:49.672+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp |
{"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"51825e25-c0a4-446b-ab79-8d23eda8e4d9","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"f73f8103-b599-432a-a35a-7971507febc2","timestampMs":1750258129672,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:49.684+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=a48970dd-f262-4a36-ac56-c888ffba5a30, timestampMs=1750258129684, name=xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-18T14:48:49.685+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"a48970dd-f262-4a36-ac56-c888ffba5a30","timestampMs":1750258129684,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:49.690+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"51825e25-c0a4-446b-ab79-8d23eda8e4d9","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"f73f8103-b599-432a-a35a-7971507febc2","timestampMs":1750258129672,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:49.691+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-18T14:48:49.701+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"a48970dd-f262-4a36-ac56-c888ffba5a30","timestampMs":1750258129684,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:49.701+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-18T14:48:49.732+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"07e7d842-2cd2-467d-b05a-cf0d92ed0835","timestampMs":1750258129430,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:49.733+00:00|INFO|XacmlPdpStateChangeListener|KAFKA-source-policy-pdp-pap] PDP State Change message has been received from the PAP - PdpStateChange(super=PdpMessage(messageName=PDP_STATE_CHANGE, requestId=07e7d842-2cd2-467d-b05a-cf0d92ed0835, timestampMs=1750258129430, name=xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85, 
pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-4225fdb5-7079-4522-827b-e59cf4ff76ca, state=ACTIVE) policy-xacml-pdp | [2025-06-18T14:48:49.734+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] set state of org.onap.policy.pdpx.main.XacmlState@5ee4ccf3 to ACTIVE policy-xacml-pdp | [2025-06-18T14:48:49.735+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] State change: ACTIVE - Starting rest controller policy-xacml-pdp | [2025-06-18T14:48:49.735+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"07e7d842-2cd2-467d-b05a-cf0d92ed0835","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"36c73ea5-dc92-47a2-bf70-8acfecccf90f","timestampMs":1750258129735,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:49.746+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"07e7d842-2cd2-467d-b05a-cf0d92ed0835","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"36c73ea5-dc92-47a2-bf70-8acfecccf90f","timestampMs":1750258129735,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:49.747+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-18T14:48:50.426+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"3801915a-f127-47c5-b352-a498d9389f4a","timestampMs":1750258130109,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:50.426+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=3801915a-f127-47c5-b352-a498d9389f4a, timestampMs=1750258130109, name=xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-4225fdb5-7079-4522-827b-e59cf4ff76ca, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[], policiesToBeUndeployed=[]) policy-xacml-pdp | [2025-06-18T14:48:50.427+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"3801915a-f127-47c5-b352-a498d9389f4a","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"1a755ab2-772f-476d-894f-8e9ee6c5abb0","timestampMs":1750258130427,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:50.439+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"3801915a-f127-47c5-b352-a498d9389f4a","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"1a755ab2-772f-476d-894f-8e9ee6c5abb0","timestampMs":1750258130427,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:48:50.439+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-18T14:49:05.021+00:00|INFO|RequestLog|qtp2014233765-31] 172.17.0.1 - - [18/Jun/2025:14:49:04 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0" policy-xacml-pdp | [2025-06-18T14:49:35.602+00:00|INFO|RequestLog|qtp2014233765-27] 172.17.0.2 - policyadmin [18/Jun/2025:14:49:35 +0000] "GET /metrics HTTP/1.1" 200 2135 "" "Prometheus/3.4.1" policy-xacml-pdp | [2025-06-18T14:49:55.334+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.5 - policyadmin [18/Jun/2025:14:49:55 +0000] "GET /policy/pdpx/v1/healthcheck?null HTTP/1.1" 200 110 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-18T14:49:55.355+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.5 - policyadmin [18/Jun/2025:14:49:55 +0000] "GET /metrics?null HTTP/1.1" 200 2050 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-18T14:49:56.898+00:00|INFO|GuardTranslator|qtp2014233765-33] Converting Request DecisionRequest(onapName=Guard, onapComponent=Guard-component, onapInstance=Guard-component-instance, requestId=unique-request-guard-1, context=null, action=guard, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={guard={actor=APPC, operation=ModifyConfig, target=f17face5-69cb-4c88-9e0b-7426db7edddd, requestId=c7c6a4aa-bb61-4a15-b831-ba1472dd4a65, clname=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}}) policy-xacml-pdp | [2025-06-18T14:49:56.917+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-dateTime policy-xacml-pdp | [2025-06-18T14:49:56.917+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-date policy-xacml-pdp | [2025-06-18T14:49:56.917+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-time policy-xacml-pdp | [2025-06-18T14:49:56.917+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:org:onap:guard:timezone policy-xacml-pdp | [2025-06-18T14:49:56.918+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:org:onap:guard:target:vf-count policy-xacml-pdp | [2025-06-18T14:49:56.918+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-name policy-xacml-pdp | [2025-06-18T14:49:56.918+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-id policy-xacml-pdp | [2025-06-18T14:49:56.918+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-type policy-xacml-pdp | [2025-06-18T14:49:56.918+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.nf-naming-code 
policy-xacml-pdp | [2025-06-18T14:49:56.918+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:org:onap:guard:target:vserver.vserver-id policy-xacml-pdp | [2025-06-18T14:49:56.918+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:org:onap:guard:target:cloud-region.cloud-region-id policy-xacml-pdp | [2025-06-18T14:49:56.922+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Constructed using properties {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-18T14:49:56.922+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-18T14:49:56.922+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Combining root policies with urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides policy-xacml-pdp | [2025-06-18T14:49:56.928+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Root Policies: 1 policy-xacml-pdp | [2025-06-18T14:49:56.928+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Referenced Policies: 0 policy-xacml-pdp | [2025-06-18T14:49:56.929+00:00|INFO|StdPolicyFinder|qtp2014233765-33] Updating policy map with policy c6dc731a-c913-4e97-a008-1c8d07a37115 version 1.0 policy-xacml-pdp | [2025-06-18T14:49:56.931+00:00|INFO|StdOnapPip|qtp2014233765-33] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, 
xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-18T14:49:57.010+00:00|INFO|LogHelper|qtp2014233765-33] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] policy-xacml-pdp | [2025-06-18T14:49:57.041+00:00|INFO|Version|qtp2014233765-33] HHH000412: Hibernate ORM core version 6.6.16.Final policy-xacml-pdp | [2025-06-18T14:49:57.061+00:00|INFO|RegionFactoryInitiator|qtp2014233765-33] HHH000026: Second-level cache disabled policy-xacml-pdp | [2025-06-18T14:49:57.200+00:00|WARN|pooling|qtp2014233765-33] HHH10001002: Using built-in connection pool (not intended for production use) policy-xacml-pdp | [2025-06-18T14:49:57.404+00:00|INFO|pooling|qtp2014233765-33] HHH10001005: Database info: policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] policy-xacml-pdp | Database driver: org.postgresql.Driver policy-xacml-pdp | Database version: 16.4 policy-xacml-pdp | Autocommit mode: false policy-xacml-pdp | Isolation level: undefined/unknown policy-xacml-pdp | Minimum pool size: 1 policy-xacml-pdp | Maximum pool size: 20 policy-xacml-pdp | [2025-06-18T14:49:58.315+00:00|INFO|JtaPlatformInitiator|qtp2014233765-33] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-xacml-pdp | [2025-06-18T14:49:58.347+00:00|INFO|StdOnapPip|qtp2014233765-33] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, 
xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-18T14:49:58.351+00:00|INFO|LogHelper|qtp2014233765-33] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] policy-xacml-pdp | [2025-06-18T14:49:58.353+00:00|INFO|RegionFactoryInitiator|qtp2014233765-33] HHH000026: Second-level cache disabled policy-xacml-pdp | [2025-06-18T14:49:58.371+00:00|WARN|pooling|qtp2014233765-33] HHH10001002: Using built-in connection pool (not intended for production use) policy-xacml-pdp | [2025-06-18T14:49:58.398+00:00|INFO|pooling|qtp2014233765-33] HHH10001005: Database info: policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] policy-xacml-pdp | Database driver: org.postgresql.Driver policy-xacml-pdp | Database version: 16.4 policy-xacml-pdp | Autocommit mode: false policy-xacml-pdp | Isolation level: undefined/unknown policy-xacml-pdp | Minimum pool size: 1 policy-xacml-pdp | Maximum pool size: 20 policy-xacml-pdp | [2025-06-18T14:49:58.428+00:00|INFO|JtaPlatformInitiator|qtp2014233765-33] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-xacml-pdp | [2025-06-18T14:49:58.432+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-33] Elapsed Time: 1514ms policy-xacml-pdp | [2025-06-18T14:49:58.432+00:00|INFO|GuardTranslator|qtp2014233765-33] Converting Response 
{results=[{decision=NotApplicable,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component-instance}],includeInResults=true}{attributeId=urn:org:onap:guard:request:request-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=unique-request-guard-1}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:guard:clname:clname-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}],includeInResults=true}{attributeId=urn:org:onap:guard:actor:actor-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=APPC}],includeInResults=true}{attributeId=urn:org:onap:guard:operation:operation-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ModifyConfig}],includeInResults=true}{attributeId=urn:org:onap:guard:target:target-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=f17face5-69cb-4c88-9e0b-7426db7edddd}],includeInResults=true}]}]}]} policy-xacml-pdp | [2025-06-18T14:49:58.436+00:00|INFO|RequestLog|qtp2014233765-33] 172.17.0.5 - policyadmin [18/Jun/2025:14:49:56 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 19 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-18T14:49:59.037+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"be26d6c0-daac-401d-9515-730726e32bcc","timestampMs":1750258198953,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:49:59.038+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=be26d6c0-daac-401d-9515-730726e32bcc, timestampMs=1750258198953, name=xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-4225fdb5-7079-4522-827b-e59cf4ff76ca, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.monitoring.tcagen2, typeVersion=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}})), ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.optimization.resource.AffinityPolicy, typeVersion=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}))], policiesToBeUndeployed=[]) policy-xacml-pdp | 
[2025-06-18T14:49:59.039+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: onap.restart.tca type: onap.policies.monitoring.tcagen2 weight: null policy: policy-xacml-pdp | {"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}} policy-xacml-pdp | [2025-06-18T14:49:59.060+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is policy-xacml-pdp | (generated XACML PolicySet XML elided: the console capture stripped the XML markup, leaving only scattered text nodes. Recoverable content: policy id onap.restart.tca, policy type onap.policies.monitoring.tcagen2, version 1.0.0, rule description "Default is to PERMIT if the policy matches.", and an obligation carrying the ToscaPolicy JSON shown above.) policy-xacml-pdp | [2025-06-18T14:49:59.060+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-18T14:49:59.061+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} into application monitoring policy-xacml-pdp | [2025-06-18T14:49:59.061+00:00|INFO|OptimizationPdpApplication|KAFKA-source-policy-pdp-pap] optimization can support onap.policies.optimization.resource.AffinityPolicy 1.0.0 policy-xacml-pdp | [2025-06-18T14:49:59.061+00:00|ERROR|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] PolicyType not found in data area yet /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml policy-xacml-pdp | java.nio.file.NoSuchFileException: /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92) policy-xacml-pdp | at
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) policy-xacml-pdp | at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218) policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:380) policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:432) policy-xacml-pdp | at java.base/java.nio.file.Files.readAllBytes(Files.java:3288) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.loadPolicyType(StdMatchableTranslator.java:515) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.findPolicyType(StdMatchableTranslator.java:480) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.convertPolicy(StdMatchableTranslator.java:241) policy-xacml-pdp | at org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplicationTranslator.convertPolicy(OptimizationPdpApplicationTranslator.java:72) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider.loadPolicy(StdXacmlApplicationServiceProvider.java:127) policy-xacml-pdp | at org.onap.policy.pdpx.main.rest.XacmlPdpApplicationManager.loadDeployedPolicy(XacmlPdpApplicationManager.java:199) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.XacmlPdpUpdatePublisher.handlePdpUpdate(XacmlPdpUpdatePublisher.java:91) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:72) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:36) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.ScoListener.onTopicEvent(ScoListener.java:75) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher.onTopicEvent(MessageTypeDispatcher.java:97) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.JsonListener.onTopicEvent(JsonListener.java:61) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.TopicBase.broadcast(TopicBase.java:170) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.fetchAllMessages(SingleThreadedBusTopicSource.java:252) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.run(SingleThreadedBusTopicSource.java:235) policy-xacml-pdp | at java.base/java.lang.Thread.run(Thread.java:840) policy-xacml-pdp | [2025-06-18T14:49:59.090+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls policy-xacml-pdp | [2025-06-18T14:49:59.092+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls policy-xacml-pdp | [2025-06-18T14:49:59.534+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] Successfully pulled onap.policies.optimization.resource.AffinityPolicy 1.0.0 policy-xacml-pdp | [2025-06-18T14:49:59.572+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.resource.AffinityPolicy:1.0.0 policy-xacml-pdp | [2025-06-18T14:49:59.572+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Retrieving datatype policy.data.affinityProperties_properties policy-xacml-pdp | [2025-06-18T14:49:59.572+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] 
Scanning PolicyType onap.policies.optimization.Resource:1.0.0 policy-xacml-pdp | [2025-06-18T14:49:59.573+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.Optimization:1.0.0 policy-xacml-pdp | [2025-06-18T14:49:59.573+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Found root - done scanning policy-xacml-pdp | [2025-06-18T14:49:59.573+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: OSDF_CASABLANCA.Affinity_Default type: onap.policies.optimization.resource.AffinityPolicy weight: 0 policy: policy-xacml-pdp | {"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}} policy-xacml-pdp | [2025-06-18T14:49:59.590+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] (generated XACML policy XML elided: the console capture stripped the XML markup, leaving only scattered text nodes. Recoverable content: rule description "Default is to PERMIT if the policy matches."; a matchable target reading "IF exists and is equal" / "Does the policy-type attribute exist?" / "Get the size of policy-type attributes" (0) / "Is this policy-type in the list?" against onap.policies.optimization.resource.AffinityPolicy; policy id OSDF_CASABLANCA.Affinity_Default; and an obligation carrying the ToscaPolicy JSON shown above.) policy-xacml-pdp | [2025-06-18T14:49:59.605+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is policy-xacml-pdp | (the same generated policy XML, elided as above) policy-xacml-pdp | [2025-06-18T14:49:59.605+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/optimization/xacml.properties policy-xacml-pdp | [2025-06-18T14:49:59.605+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0} into application optimization policy-xacml-pdp | [2025-06-18T14:49:59.606+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"be26d6c0-daac-401d-9515-730726e32bcc","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"77085c87-cdf1-4096-af1b-fdea37eebc95","timestampMs":1750258199606,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:49:59.613+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp |
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"be26d6c0-daac-401d-9515-730726e32bcc","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"77085c87-cdf1-4096-af1b-fdea37eebc95","timestampMs":1750258199606,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:49:59.613+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-18T14:50:23.230+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) policy-xacml-pdp | [2025-06-18T14:50:23.232+00:00|WARN|RequestParser|qtp2014233765-29] Unable to extract attribute value from object: urn:org:onap:policy-type policy-xacml-pdp | [2025-06-18T14:50:23.233+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-18T14:50:23.233+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-18T14:50:23.233+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-18T14:50:23.234+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Loading policy file /opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml policy-xacml-pdp | [2025-06-18T14:50:23.255+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Root Policies: 1 policy-xacml-pdp | [2025-06-18T14:50:23.255+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Referenced Policies: 0 policy-xacml-pdp | [2025-06-18T14:50:23.255+00:00|INFO|StdPolicyFinder|qtp2014233765-29] Updating policy map with policy 59aa64d7-4455-4e5c-8715-145aa66668a8 version 1.0 policy-xacml-pdp | [2025-06-18T14:50:23.255+00:00|INFO|StdPolicyFinder|qtp2014233765-29] Updating policy map with policy onap.restart.tca version 1.0.0 policy-xacml-pdp | 
[2025-06-18T14:50:23.271+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-29] Elapsed Time: 39ms policy-xacml-pdp | [2025-06-18T14:50:23.271+00:00|INFO|StdBaseTranslator|qtp2014233765-29] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=59aa64d7-4455-4e5c-8715-145aa66668a8,version=1.0}]}]} policy-xacml-pdp | [2025-06-18T14:50:23.272+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-18T14:50:23.272+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Advice found - not supported in 
this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-18T14:50:23.272+00:00|INFO|MonitoringPdpApplication|qtp2014233765-29] Abbreviating decision results DecisionResponse(status=null, message=null, advice=null, obligations=null, policies={onap.restart.tca={type=onap.policies.monitoring.tcagen2, type_version=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}}, name=onap.restart.tca, version=1.0.0, metadata={policy-id=onap.restart.tca, policy-version=1.0.0}}}, attributes=null) policy-xacml-pdp | [2025-06-18T14:50:23.274+00:00|INFO|RequestLog|qtp2014233765-29] 172.17.0.5 - policyadmin [18/Jun/2025:14:50:23 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 146 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-18T14:50:23.287+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-32] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) policy-xacml-pdp | [2025-06-18T14:50:23.287+00:00|WARN|RequestParser|qtp2014233765-32] Unable to extract attribute value from object: urn:org:onap:policy-type policy-xacml-pdp | [2025-06-18T14:50:23.288+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-32] Elapsed Time: 1ms policy-xacml-pdp | [2025-06-18T14:50:23.288+00:00|INFO|StdBaseTranslator|qtp2014233765-32] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=59aa64d7-4455-4e5c-8715-145aa66668a8,version=1.0}]}]} policy-xacml-pdp | [2025-06-18T14:50:23.288+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-32] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-18T14:50:23.289+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-32] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-18T14:50:23.289+00:00|INFO|MonitoringPdpApplication|qtp2014233765-32] Unsupported query param 
for Monitoring application: {null=[]} policy-xacml-pdp | [2025-06-18T14:50:23.291+00:00|INFO|RequestLog|qtp2014233765-32] 172.17.0.5 - policyadmin [18/Jun/2025:14:50:23 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1055 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-18T14:50:23.305+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-26] Converting Request DecisionRequest(onapName=SDNC, onapComponent=SDNC-component, onapInstance=SDNC-component-instance, requestId=unique-request-sdnc-1, context=null, action=naming, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={nfRole=[], naming-type=[], property-name=[], policy-type=[onap.policies.Naming]}) policy-xacml-pdp | [2025-06-18T14:50:23.306+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:resource:resource-id policy-xacml-pdp | [2025-06-18T14:50:23.306+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-18T14:50:23.306+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-18T14:50:23.306+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-18T14:50:23.307+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Loading policy file /opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml policy-xacml-pdp | [2025-06-18T14:50:23.314+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Root Policies: 1 policy-xacml-pdp | [2025-06-18T14:50:23.314+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Referenced Policies: 0 policy-xacml-pdp | [2025-06-18T14:50:23.314+00:00|INFO|StdPolicyFinder|qtp2014233765-26] Updating policy map with policy f453ac3d-1a0d-4644-81a0-0b16ae22e922 version 1.0 policy-xacml-pdp | [2025-06-18T14:50:23.314+00:00|INFO|StdPolicyFinder|qtp2014233765-26] Updating policy map with policy SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP version 1.0.0 policy-xacml-pdp | [2025-06-18T14:50:23.315+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-26] Elapsed Time: 9ms policy-xacml-pdp | [2025-06-18T14:50:23.315+00:00|INFO|StdBaseTranslator|qtp2014233765-26] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component-instance}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:policy-type,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}],includeInResults=true}]}],policyIdentifiers=[{id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP,version=1.0.0}],policySetIdentifiers=[{id=f453ac3d-1a0d-4644-81a0-0b16ae22e922,versi
on=1.0}]}]} policy-xacml-pdp | [2025-06-18T14:50:23.316+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-26] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-18T14:50:23.316+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-26] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-18T14:50:23.318+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.5 - policyadmin [18/Jun/2025:14:50:23 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1598 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-18T14:50:23.334+00:00|INFO|StdMatchableTranslator|qtp2014233765-26] Converting Request DecisionRequest(onapName=OOF, onapComponent=OOF-component, onapInstance=OOF-component-instance, requestId=null, context={subscriberName=[]}, action=optimize, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={scope=[], services=[], resources=[], geography=[]}) policy-xacml-pdp | [2025-06-18T14:50:23.337+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-18T14:50:23.337+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-18T14:50:23.337+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-18T14:50:23.337+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Loading policy file /opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml policy-xacml-pdp | [2025-06-18T14:50:23.344+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Root Policies: 1 policy-xacml-pdp | [2025-06-18T14:50:23.344+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Referenced Policies: 0 policy-xacml-pdp | [2025-06-18T14:50:23.344+00:00|INFO|StdPolicyFinder|qtp2014233765-26] Updating policy map with policy c867d517-8764-4274-b8fe-a75b00e16b2d version 1.0 policy-xacml-pdp | [2025-06-18T14:50:23.344+00:00|INFO|StdPolicyFinder|qtp2014233765-26] Updating policy map with policy OSDF_CASABLANCA.Affinity_Default version 1.0.0 policy-xacml-pdp | [2025-06-18T14:50:23.345+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-26] Elapsed Time: 8ms policy-xacml-pdp | [2025-06-18T14:50:23.345+00:00|INFO|StdBaseTranslator|qtp2014233765-26] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OSDF_CASABLANCA.Affinity_Default}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:weight,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#integer,value=0}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.optimization.resource.AffinityPolicy}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component-instance}],includeInResults=true}]}],policyIdentifiers=[{id=OSDF_CASABLANCA.Affinity_Default,version=1.0.0}],policySetIdentifiers=[{id=c867d517-8764-4274-b8fe-a75b00e16b2d,version=1.0}]}]} policy-xacml-pdp | [2025-06-18T14:50:23.345+00:00|INFO|StdMatchableTranslator|qtp2014233765-26] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-18T14:50:23.346+00:00|INFO|StdMatchableTranslator|qtp2014233765-26] New entry onap.policies.optimization.resource.AffinityPolicy weight 0 policy-xacml-pdp | [2025-06-18T14:50:23.347+00:00|INFO|StdMatchableTranslator|qtp2014233765-26] Policy (OSDF_CASABLANCA.Affinity_Default,{type=onap.policies.optimization.resource.AffinityPolicy, type_version=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}, name=OSDF_CASABLANCA.Affinity_Default, version=1.0.0, metadata={policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0}}) policy-xacml-pdp | [2025-06-18T14:50:23.348+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.5 - policyadmin [18/Jun/2025:14:50:23 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 467 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-18T14:50:23.746+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"source":"pap-4225fdb5-7079-4522-827b-e59cf4ff76ca","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"559d67e5-82e0-4cf4-8cb0-c97fb23bfe92","timestampMs":1750258223699,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:50:23.747+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=559d67e5-82e0-4cf4-8cb0-c97fb23bfe92, timestampMs=1750258223699, name=xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-4225fdb5-7079-4522-827b-e59cf4ff76ca, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[], policiesToBeUndeployed=[onap.restart.tca 1.0.0]) policy-xacml-pdp | [2025-06-18T14:50:23.747+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-18T14:50:23.747+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 policy-xacml-pdp | [2025-06-18T14:50:23.748+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 policy-xacml-pdp | [2025-06-18T14:50:23.748+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-18T14:50:23.748+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-18T14:50:23.749+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-18T14:50:23.750+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Unloaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} from application monitoring policy-xacml-pdp | 
[2025-06-18T14:50:23.750+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"559d67e5-82e0-4cf4-8cb0-c97fb23bfe92","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"14712725-63cd-49d0-a53d-c488a21ded08","timestampMs":1750258223750,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:50:23.759+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"559d67e5-82e0-4cf4-8cb0-c97fb23bfe92","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"14712725-63cd-49d0-a53d-c488a21ded08","timestampMs":1750258223750,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:50:23.759+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-18T14:50:35.577+00:00|INFO|RequestLog|qtp2014233765-33] 172.17.0.2 - policyadmin [18/Jun/2025:14:50:35 +0000] "GET /metrics HTTP/1.1" 200 2215 "" "Prometheus/3.4.1" policy-xacml-pdp | [2025-06-18T14:50:49.698+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=99420b24-4d92-41be-9d27-c64abab4bdab, timestampMs=1750258249698, name=xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=ACTIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0, OSDF_CASABLANCA.Affinity_Default 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-18T14:50:49.698+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"99420b24-4d92-41be-9d27-c64abab4bdab","timestampMs":1750258249698,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:50:49.709+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"99420b24-4d92-41be-9d27-c64abab4bdab","timestampMs":1750258249698,"name":"xacml-49ec1dcf-f71b-49b2-9b5f-5a6ceb36ad85","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-18T14:50:49.709+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS postgres | The files belonging to this database system will be owned by user "postgres". postgres | This user must also own the server process. 
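The decision traffic in the policy-xacml-pdp log above can be replayed by hand against a running CSIT stack. A minimal curl sketch follows; the /policy/pdpx/v1/decision path, the abbrev=true query parameter, the request fields, and the policyadmin user are taken from the log, while the host, port, and password are assumptions about the local compose environment and must be adjusted.

    # Sketch of the "configure" decision request captured above (endpoint assumed).
    PDPX="http://localhost:6969"            # assumed published address of policy-xacml-pdp
    curl -s -u "policyadmin:${PDPX_PASSWORD}" \
         -H 'Content-Type: application/json' \
         -X POST "${PDPX}/policy/pdpx/v1/decision?abbrev=true" \
         -d '{
               "ONAPName": "DCAE",
               "ONAPComponent": "PolicyHandler",
               "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
               "action": "configure",
               "resource": { "policy-id": "onap.restart.tca" }
             }'

Per the request log above, the abbreviated call returns the trimmed 146-byte body ("Abbreviating decision results"), while the same query without abbrev=true returns the full 1055-byte policy content.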
postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | waiting for server to start....2025-06-18 14:47:59.448 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-18 14:47:59.450 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-18 14:47:59.457 UTC [52] LOG: database system was shut down at 2025-06-18 14:47:58 UTC postgres | 2025-06-18 14:47:59.462 UTC [49] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. 
postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | waiting for server to shut down....2025-06-18 14:48:00.828 UTC [49] LOG: 
received fast shutdown request postgres | 2025-06-18 14:48:00.830 UTC [49] LOG: aborting any active transactions postgres | 2025-06-18 14:48:00.832 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1 postgres | 2025-06-18 14:48:00.835 UTC [50] LOG: shutting down postgres | 2025-06-18 14:48:00.837 UTC [50] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-18 14:48:01.295 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.383 s, sync=0.067 s, total=0.460 s; sync files=1788, longest=0.006 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-18 14:48:01.306 UTC [49] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-18 14:48:01.353 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-18 14:48:01.353 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-18 14:48:01.353 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-18 14:48:01.356 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-18 14:48:01.362 UTC [102] LOG: database system was shut down at 2025-06-18 14:48:01 UTC postgres | 2025-06-18 14:48:01.367 UTC [1] LOG: database system is ready to accept connections prometheus | time=2025-06-18T14:48:02.510Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-18T14:48:02.511Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-18T14:48:02.511Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-18T14:48:02.514Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-18T14:48:02.516Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-18T14:48:02.517Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-18T14:48:02.519Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-18T14:48:02.519Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-18T14:48:02.522Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-18T14:48:02.522Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.67µs prometheus | time=2025-06-18T14:48:02.522Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-18T14:48:02.528Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=5.001224ms prometheus | time=2025-06-18T14:48:02.528Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=30.511µs wal_replay_duration=5.060655ms wbl_replay_duration=220ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.67µs total_replay_duration=5.280439ms prometheus | time=2025-06-18T14:48:02.535Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-18T14:48:02.535Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-18T14:48:02.535Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-18T14:48:02.536Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-18T14:48:02.536Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.39µs remote_storage=1.73µs web_handler=770ns query_engine=1.14µs scrape=271.196µs scrape_sd=144.503µs notify=105.442µs notify_sd=11.35µs rules=2.3µs tracing=5.721µs filename=/etc/prometheus/prometheus.yml totalDuration=1.131504ms prometheus | time=2025-06-18T14:48:02.536Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-18T14:48:02.536Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-18 14:48:02,580] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 14:48:02,582] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 14:48:02,582] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 14:48:02,582] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 14:48:02,582] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 14:48:02,583] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-18 14:48:02,583] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-18 14:48:02,583] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-18 14:48:02,583] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-18 14:48:02,584] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-18 14:48:02,585] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 14:48:02,585] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 14:48:02,585] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 14:48:02,585] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 14:48:02,585] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 14:48:02,585] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-18 14:48:02,594] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-18 14:48:02,596] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-18 14:48:02,596] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-18 14:48:02,598] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-18 14:48:02,606-607] INFO [multi-line ZooKeeper ASCII-art startup banner, flattened in this capture; elided] (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper |
[2025-06-18 14:48:02,608] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,608] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,609] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,609] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,609] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,609] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,609] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,609] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,609] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,609] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,610] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper | [2025-06-18 14:48:02,610] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,610] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,611] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-18 14:48:02,611] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper | [2025-06-18 14:48:02,612] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-18 14:48:02,612] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-18 14:48:02,612] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-18 14:48:02,612] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-18 14:48:02,612] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-18 14:48:02,612] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2025-06-18 14:48:02,614] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,614] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,615] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-18 14:48:02,615] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper | [2025-06-18 14:48:02,615] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,634] INFO Logging initialized @386ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper | [2025-06-18 14:48:02,683] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-18 14:48:02,683] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-18 14:48:02,697] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-18 14:48:02,725] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper | [2025-06-18 14:48:02,725] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper | [2025-06-18 14:48:02,726] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
zookeeper | [2025-06-18 14:48:02,729] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper | [2025-06-18 14:48:02,737] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2025-06-18 14:48:02,746] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper | [2025-06-18 14:48:02,746] INFO Started @502ms (org.eclipse.jetty.server.Server)
zookeeper | [2025-06-18 14:48:02,746] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper | [2025-06-18 14:48:02,750] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-18 14:48:02,751] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2025-06-18 14:48:02,752] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-18 14:48:02,753] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2025-06-18 14:48:02,762] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-18 14:48:02,762] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2025-06-18 14:48:02,763] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-18 14:48:02,763] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-18 14:48:02,767] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper | [2025-06-18 14:48:02,767] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-18 14:48:02,769] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2025-06-18 14:48:02,769] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2025-06-18 14:48:02,770] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2025-06-18 14:48:02,777] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper | [2025-06-18 14:48:02,779] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper | [2025-06-18 14:48:02,789] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper | [2025-06-18 14:48:02,789] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper | [2025-06-18 14:48:04,885] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
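Editor's note: the ZooKeeper startup above is internally consistent: with tickTime 3000 ms and no explicit session timeouts, ZooKeeper defaults to 2x tickTime for minSessionTimeout and 20x tickTime for maxSessionTimeout, which matches the logged 6000 ms / 60000 ms. The job's real ZooKeeper config is not shown in this log, so the following is only a minimal sketch, assuming a standalone server, of a properties file that would reproduce the logged values (written as a shell heredoc; the file path is hypothetical):

# Hypothetical zookeeper.properties reproducing the logged configuration.
# Values are copied from the INFO lines above, not from the repository.
cat > /tmp/zookeeper.properties <<'EOF'
tickTime=3000             # logged: Created server with tickTime 3000 ms
dataDir=/var/lib/zookeeper/data    # logged snapdir parent
dataLogDir=/var/lib/zookeeper/log  # logged datadir (txn log) parent
clientPort=2181           # logged: binding to port 0.0.0.0/0.0.0.0:2181
admin.serverPort=8080     # logged: AdminServer on 0.0.0.0:8080, command URL /commands
# min/maxSessionTimeout left unset: ZooKeeper then uses 2x/20x tickTime,
# matching the logged 6000 ms / 60000 ms.
EOF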
Tearing down containers...
Container policy-xacml-pdp Stopping
Container grafana Stopping
Container policy-csit Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-xacml-pdp Stopped
Container policy-xacml-pdp Removing
Container policy-xacml-pdp Removed
Container policy-pap Stopping
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container kafka Stopping
Container policy-api Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container postgres Stopping
Container postgres Stopped
Container postgres Removing
Container postgres Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2093 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins3009221598065712387.sh
---> sysstat.sh
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins11948494592928851680.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']'
+ mkdir -p /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/archives/
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins3269615754106623927.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ah7R from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ah7R/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
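Editor's note: the package-listing.sh xtrace above shows the whole technique: snapshot `dpkg -l` at build start and again at build end, diff the two, and archive all three files. The real global-jjb package-listing.sh does more than this; the following is only a minimal sketch of the Debian branch reconstructed from the trace (the WORKSPACE handling is an assumption):

#!/bin/bash
# Sketch of the package-diff logic visible in the xtrace above.
OS_FAMILY=$(facter osfamily | tr '[:upper:]' '[:lower:]')
workspace=${WORKSPACE:-}            # assumption: Jenkins-provided workspace path
START_PACKAGES=/tmp/packages_start.txt
END_PACKAGES=/tmp/packages_end.txt
DIFF_PACKAGES=/tmp/packages_diff.txt

# First invocation records the starting set; when a workspace exists
# (post-build), record the end set instead, as the trace shows.
PACKAGES=$START_PACKAGES
[ -n "$workspace" ] && PACKAGES=$END_PACKAGES

case "$OS_FAMILY" in
  debian) dpkg -l | grep '^ii' > "$PACKAGES" ;;
esac

# Once both snapshots exist, diff them (diff exits 1 on differences)
# and copy everything to the archives/ directory Jenkins collects.
if [ -f "$START_PACKAGES" ] && [ -f "$END_PACKAGES" ]; then
  diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES" || true
  mkdir -p "$workspace/archives/"
  cp -f "$DIFF_PACKAGES" "$END_PACKAGES" "$START_PACKAGES" "$workspace/archives/"
fi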
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins4267243280875181687.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/config883296254324461816tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins3523427814259850414.sh
---> create-netrc.sh
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins12159846004599474721.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ah7R from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ah7R/bin to PATH
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins2723611629212972366.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins6322787231098384006.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ah7R from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-ah7R/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash -l /tmp/jenkins6064834936500942528.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ah7R from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ah7R/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/818
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
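Editor's note: every post-build step above prints the same "Reuse venv:/tmp/venv-ah7R from file:/tmp/.os_lf_venv" lines because lf-activate-venv() creates the virtualenv once and records its path, and every later call re-activates it instead of rebuilding. The real implementation lives in LF's releng tooling and is more elaborate; the following is only a sketch of that reuse pattern, with a hypothetical function name and the file paths taken from the log:

# Sketch of the venv-reuse pattern shown by the lf-activate-venv() lines.
lf_activate_venv_sketch() {
  local venv
  if [ -f /tmp/.os_lf_venv ]; then
    # A venv was already built earlier in this job: reuse it.
    venv=$(cat /tmp/.os_lf_venv)
    echo "lf-activate-venv(): INFO: Reuse venv:${venv} from file:/tmp/.os_lf_venv"
  else
    # First call: create the venv and remember where it is.
    venv=$(mktemp -d /tmp/venv-XXXX)
    python3 -m venv "$venv"
    echo "$venv" > /tmp/.os_lf_venv
  fi
  # Install whatever this step needs (e.g. lftools) into the shared venv.
  [ $# -gt 0 ] && "$venv/bin/pip" install --quiet --upgrade "$@"
  export PATH="$venv/bin:$PATH"
}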
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-22113 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:         x86_64
CPU op-mode(s):       32-bit, 64-bit
Byte Order:           Little Endian
CPU(s):               8
On-line CPU(s) list:  0-7
Thread(s) per core:   1
Core(s) per socket:   1
Socket(s):            8
NUMA node(s):         1
Vendor ID:            AuthenticAMD
CPU family:           23
Model:                49
Model name:           AMD EPYC-Rome Processor
Stepping:             0
CPU MHz:              2799.998
BogoMIPS:             5599.99
Virtualization:       AMD-V
Hypervisor vendor:    KVM
Virtualization type:  full
L1d cache:            32K
L1i cache:            32K
L2 cache:             512K
L3 cache:             16384K
NUMA node0 CPU(s):    0-7
Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   15G  141G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         910       24257           0        6998       30801
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:38:db:63 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.27/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85946sec preferred_lft 85946sec
    inet6 fe80::f816:3eff:fe38:db63/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:81:50:e4:12 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:81ff:fe50:e412/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22113)  06/18/25  _x86_64_  (8 CPU)

14:45:10     LINUX RESTART  (8 CPU)

14:46:02        tps      rtps      wtps   bread/s   bwrtn/s
14:47:01     120.30     19.49    100.81   2281.24  23497.03
14:48:01     501.60      5.87    495.73    464.72 185049.56
14:49:01     178.87      0.12    178.76      8.80  22619.53
14:50:01     226.00      0.70    225.30     44.53  47342.11
14:51:01      15.93      0.00     15.93      0.00  14027.93
14:52:01      68.57      0.63     67.94     33.86  14849.93
Average:     185.39      4.43    180.97    467.14  51307.46

14:46:02  kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact  kbdirty
14:47:01   29423364  31612828   3515856     10.67     78236   2409108   1932868      5.69    958904   2233700   615548
14:48:01   25165792  31407428   7773428     23.60    146244   6199072   4523980     13.31   1275636   5902932     3196
14:49:01   23416380  29845464   9522840     28.91    159172   6375292   7828712     23.03   3023504   5861976      448
14:50:01   22587336  29540384  10351884     31.43    199756   6803544   8314548     24.46   3434936   6215240     2292
14:51:01   22650280  29604388  10288940     31.24    199900   6804444   8249160     24.27   3374612   6212108      344
14:52:01   24902220  31593924   8037000     24.40    200612   6536076   1548008      4.55   1442436   5968372      888
Average:   24690895  30600736   8248325     25.04    163987   5854589   5399546     15.89   2251671   5399055   103786

14:46:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
14:47:01           lo     10.10     10.10      0.95      0.95      0.00      0.00      0.00      0.00
14:47:01         ens3    548.25    373.41   7547.79     35.53      0.00      0.00      0.00      0.00
14:47:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:48:01  vetheb37e7d      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:48:01  veth8d6aca6      0.00      0.13      0.00      0.01      0.00      0.00      0.00      0.00
14:48:01  veth0517b7e      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
14:48:01  vethd4273dc      0.12      0.23      0.01      0.02      0.00      0.00      0.00      0.00
14:49:01  veth8d6aca6      5.61      7.53      0.88      1.02      0.00      0.00      0.00      0.00
14:49:01  veth1ecbc5c      3.40      3.67      0.45      0.43      0.00      0.00      0.00      0.00
14:49:01  veth79ceec8     52.52     77.94      4.16    310.71      0.00      0.00      0.00      0.03
14:49:01  br-7e49ce13b96d  52.72     77.76      3.44    310.70      0.00      0.00      0.00      0.00
14:50:01  veth8d6aca6      0.17      0.47      0.01      0.03      0.00      0.00      0.00      0.00
14:50:01  veth379972a      0.73      0.77      0.53      0.29      0.00      0.00      0.00      0.00
14:50:01  veth1ecbc5c      4.27      5.75      0.85      0.64      0.00      0.00      0.00      0.00
14:50:01  veth79ceec8      0.52      0.45      0.03      0.03      0.00      0.00      0.00      0.00
14:51:01  veth8d6aca6      0.17      0.35      0.01      0.02      0.00      0.00      0.00      0.00
14:51:01  veth379972a      1.32      1.10      0.15      0.21      0.00      0.00      0.00      0.00
14:51:01  veth1ecbc5c      3.88      5.40      0.68      0.48      0.00      0.00      0.00      0.00
14:51:01  veth79ceec8      0.00      0.02      0.00      0.00      0.00      0.00      0.00      0.00
14:52:01           lo     26.10     26.10      2.34      2.34      0.00      0.00      0.00      0.00
14:52:01         ens3   2260.41   1342.59  36539.83    198.54      0.00      0.00      0.00      0.00
14:52:01      docker0    114.81    169.32      7.59   1347.12      0.00      0.00      0.00      0.00
Average:           lo      3.64      3.64      0.33      0.33      0.00      0.00      0.00      0.00
Average:         ens3    320.10    185.13   5937.05     21.34      0.00      0.00      0.00      0.00
Average:      docker0     19.19     28.30      1.27    225.14      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22113)  06/18/25  _x86_64_  (8 CPU)

14:45:10     LINUX RESTART  (8 CPU)

14:46:02     CPU     %user     %nice   %system   %iowait    %steal     %idle
14:47:01     all      8.57      0.00      1.07      3.80      0.03     86.53
14:47:01       0      7.71      0.00      1.34     16.86      0.05     74.04
14:47:01       1     18.96      0.00      1.27      1.87      0.05     77.85
14:47:01       2      6.77      0.00      0.81      0.31      0.03     92.08
14:47:01       3     11.32      0.00      1.50      0.87      0.03     86.28
14:47:01       4      4.96      0.00      0.81      1.22      0.03     92.97
14:47:01       5      2.08      0.00      0.65      0.02      0.02     97.24
14:47:01       6     14.15      0.00      1.67      9.18      0.05     74.95
14:47:01       7      2.64      0.00      0.49      0.07      0.02     96.78
14:48:01     all     14.32      0.00      6.97     13.76      0.07     64.88
14:48:01       0     12.04      0.00      8.63     59.73      0.07     19.53
14:48:01       1     14.12      0.00      6.88     12.26      0.05     66.69
14:48:01       2     15.05      0.00      7.06      3.48      0.05     74.36
14:48:01       3     14.89      0.00      7.30      1.73      0.07     76.01
14:48:01       4     15.32      0.00      6.03      7.50      0.05     71.09
14:48:01       5     15.15      0.00      6.72      1.88      0.07     76.18
14:48:01       6     14.23      0.00      7.44     23.36      0.07     54.90
14:48:01       7     13.71      0.00      5.74      1.25      0.07     79.24
14:49:01     all     28.14      0.00      3.47      2.56      0.09     65.75
14:49:01       0     23.74      0.00      2.99      1.23      0.08     71.96
14:49:01       1     29.78      0.00      3.84      2.73      0.08     63.57
14:49:01       2     25.88      0.00      3.53      9.11      0.08     61.40
14:49:01       3     30.20      0.00      3.90      0.74      0.08     65.08
14:49:01       4     32.25      0.00      3.75      1.90      0.10     62.00
14:49:01       5     27.92      0.00      3.22      1.86      0.10     66.90
14:49:01       6     32.79      0.00      3.57      1.49      0.10     62.06
14:49:01       7     22.50      0.00      2.97      1.46      0.07     73.00
14:50:01     all     11.68      0.00      2.77      3.42      0.07     82.05
14:50:01       0      9.56      0.00      2.88      0.22      0.07     87.28
14:50:01       1     10.60      0.00      2.34      0.38      0.07     86.61
14:50:01       2      8.64      0.00      2.94      3.69      0.08     84.65
14:50:01       3     13.08      0.00      3.49      3.23      0.07     80.13
14:50:01       4     12.92      0.00      2.35      6.28      0.07     78.37
14:50:01       5     19.73      0.00      3.61      5.03      0.08     71.55
14:50:01       6     10.58      0.00      2.35      0.89      0.08     86.09
14:50:01       7      8.31      0.00      2.24      7.67      0.07     81.71
14:51:01     all      1.21      0.00      0.34      0.72      0.03     97.70
14:51:01       0      0.87      0.00      0.39      0.02      0.03     98.69
14:51:01       1      1.49      0.00      0.38      0.05      0.03     98.04
14:51:01       2      1.04      0.00      0.28      0.02      0.03     98.63
14:51:01       3      1.07      0.00      0.20      0.02      0.02     98.70
14:51:01       4      1.22      0.00      0.27      5.36      0.03     93.12
14:51:01       5      1.72      0.00      0.45      0.12      0.03     97.68
14:51:01       6      1.43      0.00      0.50      0.00      0.03     98.03
14:51:01       7      0.87      0.00      0.23      0.18      0.02     98.70
14:52:01     all      1.67      0.00      0.72      0.87      0.03     96.71
14:52:01       0      1.63      0.00      0.61      2.96      0.03     94.77
14:52:01       1      1.65      0.00      0.73      0.08      0.03     97.49
14:52:01       2      1.55      0.00      0.80      0.13      0.03     97.48
14:52:01       3      1.46      0.00      0.57      0.07      0.02     97.89
14:52:01       4      2.34      0.00      0.77      2.66      0.03     94.20
14:52:01       5      1.65      0.00      0.70      0.60      0.02     97.03
14:52:01       6      1.50      0.00      0.87      0.07      0.03     97.53
14:52:01       7      1.57      0.00      0.72      0.40      0.03     97.28
Average:     all     10.92      0.00      2.55      4.17      0.05     82.31
Average:       0      9.25      0.00      2.78     13.26      0.06     74.66
Average:       1     12.74      0.00      2.57      2.88      0.05     81.75
Average:       2      9.80      0.00      2.57      2.79      0.05     84.79
Average:       3     11.99      0.00      2.82      1.11      0.05     84.04
Average:       4     11.49      0.00      2.33      4.16      0.05     81.97
Average:       5     11.38      0.00      2.56      1.58      0.05     84.43
Average:       6     12.43      0.00      2.73      5.81      0.06     78.97
Average:       7      8.26      0.00      2.06      1.84      0.04     87.79
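Editor's note: the sysstat section above is a replay of samples recorded during the build: `sar -b -r -n DEV` reads back I/O, memory, and per-interface network counters, and `sar -P ALL` reads back per-CPU utilization, both from the same activity file. The job's sysstat.sh is LF tooling and its exact invocation is not shown; the following is only a minimal sketch of producing an equivalent dump, with an assumed file path and sampling schedule (60 s intervals, 6 samples, roughly matching the 14:46-14:52 window above):

# Record system activity during the run, then replay it per subsystem.
SA_FILE=/tmp/build_sa.data                 # hypothetical activity file
sar -o "$SA_FILE" 60 6 >/dev/null 2>&1 &   # sample every 60 s, 6 times, in background
wait                                       # let the recording finish
sar -b -r -n DEV -f "$SA_FILE"             # I/O, memory, per-interface network stats
sar -P ALL -f "$SA_FILE"                   # per-CPU utilization breakdown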