Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-21665 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-uTVy9EDQ0Dkn/agent.2071
SSH_AGENT_PID=2073
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/private_key_15688536647788327738.key (/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/private_key_15688536647788327738.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 473f78ecac5fb75e5968b31a5bab95eaba72c803 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=30
Commit message: "Add Fix fail handling in ACM runtime in CSIT"
 > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
provisioning config files...
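For reference, the checkout above can be replayed outside Jenkins using the mirror URL and revision recorded in the log; a minimal sketch (the target directory name is arbitrary, not from the job):

    # Replay the clone/checkout Jenkins performed (URL and SHA taken from the log)
    git init policy-docker && cd policy-docker
    git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803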
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins6239765298903646518.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-ytGL
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ytGL/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-ytGL/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.36
botocore==1.38.36
bs4==0.0.2
cachetools==5.5.2
certifi==2025.6.15
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
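The venv bootstrap above comes from the LF python-tools-install.sh helper; a rough, simplified sketch of what lf-activate-venv() does (the venv path below is illustrative; this job actually used /tmp/venv-ytGL):

    # Create a throwaway venv, install lftools, and snapshot the package set
    python3 -m venv /tmp/venv-example
    . /tmp/venv-example/bin/activate
    pip install lftools
    pip freeze    # the "Generating Requirements File" step prints this listing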
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/sh /tmp/jenkins15138425634724370426.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/sh -xe /tmp/jenkins9890651721019404255.sh
+ /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/csit/run-project-csit.sh xacml-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl download progress elided: 60.2MB fetched in roughly one second at ~60MB/s]
Setting project configuration for: xacml-pdp
Configuring docker compose...
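Two of the messages above are actionable: the docker login warning and the on-the-fly Compose plugin install. A hedged sketch of both fixes ($REGISTRY, $DOCKER_USER, and $DOCKER_PASS are placeholders, not values from this job; the Compose asset name assumes linux/amd64):

    # Log in without exposing the password in argv, as the warning suggests
    echo "$DOCKER_PASS" | docker login "$REGISTRY" --username "$DOCKER_USER" --password-stdin

    # Install the Compose v2 CLI plugin manually when 'docker compose' is missing
    mkdir -p ~/.docker/cli-plugins
    curl -fsSL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
      -o ~/.docker/cli-plugins/docker-compose
    chmod +x ~/.docker/cli-plugins/docker-compose
    docker compose version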
Starting xacml-pdp using postgres + Grafana/Prometheus
pap Pulling
kafka Pulling
policy-db-migrator Pulling
grafana Pulling
zookeeper Pulling
xacml-pdp Pulling
api Pulling
prometheus Pulling
postgres Pulling
[per-layer pull progress elided: several dozen image layers cycling through "Pulling fs layer", "Downloading", "Verifying Checksum", "Extracting", and "Pull complete"]
xacml-pdp Pulled
api Pulled
pap Pulled
policy-db-migrator Pulled
[excerpt ends mid-pull; the kafka, grafana, zookeeper, prometheus, and postgres images were still downloading/extracting at the cut-off]
[==================================> ] 256.2MB/375MB 55f2b468da67 Extracting [============================> ] 148.7MB/257.9MB c49e0ee60bfb Extracting [==================> ] 38.99MB/107.3MB 55f2b468da67 Extracting [=============================> ] 152.6MB/257.9MB eabd8714fec9 Extracting [===================================> ] 263.5MB/375MB c49e0ee60bfb Extracting [====================> ] 44.56MB/107.3MB 55f2b468da67 Extracting [==============================> ] 156MB/257.9MB eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB c49e0ee60bfb Extracting [======================> ] 47.91MB/107.3MB 55f2b468da67 Extracting [==============================> ] 158.8MB/257.9MB eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB c49e0ee60bfb Extracting [========================> ] 53.48MB/107.3MB 55f2b468da67 Extracting [===============================> ] 163.8MB/257.9MB eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB c49e0ee60bfb Extracting [===========================> ] 58.49MB/107.3MB 55f2b468da67 Extracting [================================> ] 168.2MB/257.9MB eabd8714fec9 Extracting [====================================> ] 273MB/375MB 7df673c7455d Pull complete 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB c49e0ee60bfb Extracting [=============================> ] 63.5MB/107.3MB c49e0ee60bfb Extracting [==============================> ] 65.18MB/107.3MB eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB c49e0ee60bfb Extracting [==============================> ] 66.29MB/107.3MB 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB c49e0ee60bfb Extracting [================================> ] 70.19MB/107.3MB e27c75a98748 Pull complete eabd8714fec9 Extracting [====================================> ] 275.2MB/375MB 55f2b468da67 Extracting [=================================> ] 171.6MB/257.9MB c49e0ee60bfb Extracting [==================================> ] 73.53MB/107.3MB eabd8714fec9 Extracting [=====================================> ] 279.1MB/375MB 55f2b468da67 Extracting [=================================> ] 173.2MB/257.9MB c49e0ee60bfb Extracting [===================================> ] 76.87MB/107.3MB eabd8714fec9 Extracting [=====================================> ] 284.7MB/375MB 55f2b468da67 Extracting [=================================> ] 174.9MB/257.9MB c49e0ee60bfb Extracting [=====================================> ] 79.66MB/107.3MB eabd8714fec9 Extracting [======================================> ] 287.4MB/375MB 55f2b468da67 Extracting [==================================> ] 176MB/257.9MB c49e0ee60bfb Extracting [======================================> ] 83.56MB/107.3MB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB eabd8714fec9 Extracting [======================================> ] 292.5MB/375MB 55f2b468da67 Extracting [==================================> ] 178.8MB/257.9MB c49e0ee60bfb Extracting [========================================> ] 87.46MB/107.3MB e73cb4a42719 Extracting [=> ] 3.899MB/109.1MB eabd8714fec9 Extracting [=======================================> ] 295.2MB/375MB 55f2b468da67 Extracting [===================================> ] 182.2MB/257.9MB c49e0ee60bfb Extracting [============================================> ] 95.26MB/107.3MB e73cb4a42719 Extracting [===> ] 7.799MB/109.1MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 55f2b468da67 Extracting [====================================> ] 
186.1MB/257.9MB e73cb4a42719 Extracting [=====> ] 11.7MB/109.1MB c49e0ee60bfb Extracting [==============================================> ] 100.3MB/107.3MB eabd8714fec9 Extracting [=======================================> ] 298.6MB/375MB 55f2b468da67 Extracting [====================================> ] 190.5MB/257.9MB e73cb4a42719 Extracting [=======> ] 16.15MB/109.1MB c49e0ee60bfb Extracting [================================================> ] 103.6MB/107.3MB eabd8714fec9 Extracting [========================================> ] 300.3MB/375MB 55f2b468da67 Extracting [=====================================> ] 193.9MB/257.9MB e73cb4a42719 Extracting [========> ] 18.94MB/109.1MB eabd8714fec9 Extracting [========================================> ] 302.5MB/375MB c49e0ee60bfb Extracting [================================================> ] 104.7MB/107.3MB e73cb4a42719 Extracting [==========> ] 22.28MB/109.1MB 55f2b468da67 Extracting [=====================================> ] 195.5MB/257.9MB c49e0ee60bfb Extracting [=================================================> ] 107MB/107.3MB eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB c49e0ee60bfb Extracting [==================================================>] 107.3MB/107.3MB e73cb4a42719 Extracting [===========> ] 25.62MB/109.1MB e73cb4a42719 Extracting [============> ] 26.74MB/109.1MB 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB e73cb4a42719 Extracting [==============> ] 30.64MB/109.1MB 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB e73cb4a42719 Extracting [===============> ] 33.42MB/109.1MB 55f2b468da67 Extracting [=======================================> ] 201.7MB/257.9MB eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB e73cb4a42719 Extracting [=================> ] 38.44MB/109.1MB 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB e73cb4a42719 Extracting [====================> ] 44.56MB/109.1MB eabd8714fec9 Extracting [=========================================> ] 311.4MB/375MB 55f2b468da67 Extracting [=======================================> ] 204.4MB/257.9MB e73cb4a42719 Extracting [=======================> ] 50.69MB/109.1MB eabd8714fec9 Extracting [=========================================> ] 313.1MB/375MB 55f2b468da67 Extracting [=======================================> ] 206.1MB/257.9MB e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB e73cb4a42719 Extracting [=========================> ] 54.59MB/109.1MB eabd8714fec9 Extracting [==========================================> ] 316.4MB/375MB 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB c49e0ee60bfb Pull complete e73cb4a42719 Extracting [==========================> ] 57.93MB/109.1MB eabd8714fec9 Extracting [==========================================> ] 319.8MB/375MB prometheus Pulled 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB e73cb4a42719 Extracting [===========================> ] 59.6MB/109.1MB e73cb4a42719 Extracting [==============================> ] 65.73MB/109.1MB e73cb4a42719 Extracting 
[=================================> ] 72.97MB/109.1MB eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB e73cb4a42719 Extracting [===================================> ] 76.87MB/109.1MB 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB 384497dbce3b Extracting [> ] 557.1kB/63.48MB eabd8714fec9 Extracting [===========================================> ] 326.4MB/375MB 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB e73cb4a42719 Extracting [=====================================> ] 80.77MB/109.1MB 384497dbce3b Extracting [> ] 1.114MB/63.48MB 55f2b468da67 Extracting [=========================================> ] 215MB/257.9MB e73cb4a42719 Extracting [=======================================> ] 85.23MB/109.1MB eabd8714fec9 Extracting [===========================================> ] 328.1MB/375MB 55f2b468da67 Extracting [=========================================> ] 215.6MB/257.9MB 55f2b468da67 Extracting [=========================================> ] 216.1MB/257.9MB e73cb4a42719 Extracting [=========================================> ] 91.36MB/109.1MB eabd8714fec9 Extracting [===========================================> ] 328.7MB/375MB 384497dbce3b Extracting [=> ] 1.671MB/63.48MB 55f2b468da67 Extracting [==========================================> ] 219.5MB/257.9MB e73cb4a42719 Extracting [==========================================> ] 93.03MB/109.1MB eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB eabd8714fec9 Extracting [============================================> ] 330.9MB/375MB e73cb4a42719 Extracting [===========================================> ] 94.7MB/109.1MB 384497dbce3b Extracting [=> ] 2.228MB/63.48MB 55f2b468da67 Extracting [==========================================> ] 221.7MB/257.9MB e73cb4a42719 Extracting [===========================================> ] 95.26MB/109.1MB eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 55f2b468da67 Extracting [===========================================> ] 223.9MB/257.9MB e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB 384497dbce3b Extracting [==> ] 2.785MB/63.48MB e73cb4a42719 Extracting [============================================> ] 97.48MB/109.1MB eabd8714fec9 Extracting [============================================> ] 332.6MB/375MB 55f2b468da67 Extracting [===========================================> ] 225.6MB/257.9MB e73cb4a42719 Extracting [=============================================> ] 100.3MB/109.1MB 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB eabd8714fec9 Extracting [============================================> ] 335.3MB/375MB 384497dbce3b Extracting [===> ] 4.456MB/63.48MB e73cb4a42719 Extracting [==============================================> ] 102.5MB/109.1MB eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB 384497dbce3b Extracting [===> ] 5.014MB/63.48MB 55f2b468da67 Extracting [============================================> ] 230.6MB/257.9MB e73cb4a42719 Extracting [================================================> ] 104.7MB/109.1MB 384497dbce3b Extracting [=====> ] 
6.685MB/63.48MB 384497dbce3b Extracting [=====> ] 7.242MB/63.48MB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB e73cb4a42719 Extracting [================================================> ] 105.3MB/109.1MB 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB 384497dbce3b Extracting [======> ] 7.799MB/63.48MB 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB e73cb4a42719 Extracting [=================================================> ] 107MB/109.1MB 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB 384497dbce3b Extracting [======> ] 8.356MB/63.48MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 55f2b468da67 Extracting [=============================================> ] 234MB/257.9MB 384497dbce3b Extracting [=======> ] 9.47MB/63.48MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB e73cb4a42719 Extracting [=================================================> ] 108.1MB/109.1MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 384497dbce3b Extracting [=======> ] 10.03MB/63.48MB 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB 55f2b468da67 Extracting [==============================================> ] 237.3MB/257.9MB 384497dbce3b Extracting [=========> ] 11.7MB/63.48MB 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB 384497dbce3b Extracting [===========> ] 14.48MB/63.48MB 55f2b468da67 Extracting [==============================================> ] 241.8MB/257.9MB 384497dbce3b Extracting [============> ] 16.15MB/63.48MB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB eabd8714fec9 Extracting [==============================================> ] 350.4MB/375MB 384497dbce3b Extracting [=============> ] 17.27MB/63.48MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB eabd8714fec9 Extracting [===============================================> ] 353.2MB/375MB 384497dbce3b Extracting [==============> ] 18.38MB/63.48MB 55f2b468da67 Extracting [================================================> ] 251.8MB/257.9MB 384497dbce3b Extracting [================> ] 21.17MB/63.48MB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 384497dbce3b Extracting [===================> ] 24.51MB/63.48MB 55f2b468da67 Extracting [=================================================> ] 256.2MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB eabd8714fec9 Extracting [================================================> ] 362.1MB/375MB 384497dbce3b Extracting [======================> ] 28.41MB/63.48MB eabd8714fec9 Extracting [=================================================> ] 368.2MB/375MB 384497dbce3b Extracting [=========================> ] 31.75MB/63.48MB 
eabd8714fec9 Extracting [=================================================> ] 373.2MB/375MB 384497dbce3b Extracting [===========================> ] 35.09MB/63.48MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB e73cb4a42719 Pull complete 384497dbce3b Extracting [============================> ] 36.21MB/63.48MB 384497dbce3b Extracting [=============================> ] 37.32MB/63.48MB 384497dbce3b Extracting [================================> ] 41.22MB/63.48MB 384497dbce3b Extracting [===================================> ] 45.12MB/63.48MB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 384497dbce3b Extracting [======================================> ] 49.02MB/63.48MB 384497dbce3b Extracting [=======================================> ] 49.58MB/63.48MB 384497dbce3b Extracting [=========================================> ] 52.92MB/63.48MB 55f2b468da67 Pull complete 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB 384497dbce3b Extracting [=================================================> ] 62.39MB/63.48MB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 82bfc142787e Extracting [> ] 98.3kB/8.613MB 82bfc142787e Extracting [===============> ] 2.753MB/8.613MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB eabd8714fec9 Pull complete a83b68436f09 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 384497dbce3b Pull complete 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B 82bfc142787e Pull complete 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 055b9255fa03 Pull complete 787d6bee9571 Pull complete 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 46baca71a4ef Pull complete 8f10199ed94b Extracting [========> ] 1.573MB/8.768MB 13ff0988aaea Pull complete 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB b176d7edde70 Pull complete b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB grafana Pulled 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 
21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB b0e0ef7895f4 Extracting [==================> ] 13.37MB/37.01MB 4b82842ab819 Pull complete 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B f963a77d2726 Pull complete b0e0ef7895f4 Extracting [======================================> ] 28.31MB/37.01MB f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 7e568a0dc8fb Pull complete postgres Pulled b0e0ef7895f4 Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB f3a82e9f1761 Extracting [=================> ] 15.14MB/44.41MB c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B f3a82e9f1761 Extracting [===================================> ] 31.65MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB e040ea11fa10 Pull complete 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 09d5a3f70313 Extracting [=====> ] 11.7MB/109.2MB 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 09d5a3f70313 Extracting [============> ] 27.85MB/109.2MB 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 09d5a3f70313 Extracting [====================> ] 44.56MB/109.2MB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Pull complete 09d5a3f70313 Extracting [=============================> ] 63.5MB/109.2MB da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 09d5a3f70313 Extracting [=====================================> ] 82.44MB/109.2MB da3ed5db7103 Extracting [=====> ] 14.48MB/127.4MB 09d5a3f70313 Extracting [=============================================> ] 99.16MB/109.2MB da3ed5db7103 
09d5a3f70313 Pull complete
356f5c2c843b Pull complete
kafka Pulled
da3ed5db7103 Pull complete
c955f6e31a04 Pull complete
zookeeper Pulled
Network compose_default Creating
Network compose_default Created
Container zookeeper Creating
Container postgres Creating
Container prometheus Creating
Container postgres Created
Container zookeeper Created
Container prometheus Created
Container kafka Creating
Container grafana Creating
Container policy-db-migrator Creating
Container grafana Created
Container policy-db-migrator Created
Container policy-api Creating
Container kafka Created
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-xacml-pdp Creating
Container policy-xacml-pdp Created
Container prometheus Starting
Container postgres Starting
Container zookeeper Starting
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container postgres Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container policy-pap Starting
Container policy-pap Started
Container policy-xacml-pdp Starting
Container prometheus Started
Container grafana Starting
Container policy-xacml-pdp Started
Container grafana Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 1 minute for xacml-pdp to start...
Checking if REST port 30004 is open on localhost ...
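The port probe above is emitted by the CSIT setup scripts, which are not part of this log. A minimal sketch of how such a readiness check is commonly implemented in bash with netcat, assuming the host and port values shown in the log and a hypothetical 120-second timeout (the real helper script may differ):

#!/bin/bash
# Illustrative only: poll until the xacml-pdp REST port accepts TCP
# connections, or give up after TIMEOUT seconds.
HOST=localhost
PORT=30004      # REST port from the log above
TIMEOUT=120     # assumed value; not taken from this log
for _ in $(seq 1 "$TIMEOUT"); do
    if nc -z "$HOST" "$PORT" 2>/dev/null; then
        echo "REST port $PORT is open on $HOST"
        exit 0
    fi
    sleep 1
done
echo "Timed out after ${TIMEOUT}s waiting for $HOST:$PORT" >&2
exit 1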
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up About a minute
Cloning into '/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/csit/resources/tests/models'...
Building robot framework docker image
sha256:53c0454e4fa8e231e0d6aba9040e3408ace50506b8a72de13f7efe5ee54e35de
top - 18:34:27 up 4 min, 0 users, load average: 2.24, 1.68, 0.73
Tasks: 230 total, 1 running, 151 sleeping, 0 stopped, 0 zombie
%Cpu(s): 13.9 us, 3.1 sy, 0.0 ni, 77.7 id, 5.1 wa, 0.0 hi, 0.1 si, 0.1 st
        total   used   free   shared   buff/cache   available
Mem:    31G     2.5G   21G    27M     7.1G         28G
Swap:   1.0G    0B     1.0G
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up About a minute
CONTAINER ID   NAME               CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
42b6b53ec07f   policy-xacml-pdp   1.87%   176.2MiB / 31.41GiB   0.55%   45.1kB / 55.5kB   0B / 4.1kB      51
daf2f447f282   policy-pap         0.97%   517MiB / 31.41GiB     1.61%   2.14MB / 1.06MB   0B / 139MB      68
91e6420c4be3   policy-api         0.20%   415.2MiB / 31.41GiB   1.29%   1.14MB / 986kB    0B / 0B         59
c97c08af22ba   kafka              1.53%   390MiB / 31.41GiB     1.21%   186kB / 175kB     0B / 582kB      83
331bcc1edeb1   grafana            1.13%   109.5MiB / 31.41GiB   0.34%   19.1MB / 201kB    0B / 31.4MB     22
574b837ab926   zookeeper          0.17%   84.4MiB / 31.41GiB    0.26%   55.3kB / 46.9kB   4.1kB / 430kB   62
8bd6e6ee3690   prometheus         0.00%   20.32MiB / 31.41GiB   0.06%   62.8kB / 3.44kB   225kB / 0B      11
f42bcd06be9b   postgres           0.54%   86.13MiB / 31.41GiB   0.27%   2.56MB / 3.74MB   0B / 157MB      26
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | MakeTopics :: Creates the Policy topics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ExecuteXacmlPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
policy-csit | 4 tests, 4 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
policy-csit | 6 tests, 6 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up 3 minutes
Shut down started!
Collecting logs from docker compose containers...
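The grafana log stream that follows is the output of this collection step. A minimal sketch of how per-service logs can be captured from a Compose project before teardown, assuming docker compose v2 and a hypothetical archives/ output directory (the actual CSIT teardown script is not shown in this log):

#!/bin/bash
# Illustrative only: dump each compose service's log to its own file,
# then bring the stack down.
OUT=archives    # assumed output directory; not taken from this log
mkdir -p "$OUT"
for svc in $(docker compose ps --services); do
    # --no-color keeps the files grep-friendly; container stderr is
    # part of the log stream, so redirect it as well
    docker compose logs --no-color "$svc" > "$OUT/$svc.log" 2>&1
done
docker compose down --volumes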
grafana | logger=settings t=2025-06-16T18:32:43.677146236Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-16T18:32:43Z
grafana | logger=settings t=2025-06-16T18:32:43.677449748Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-16T18:32:43.677460838Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-16T18:32:43.677464858Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-16T18:32:43.677468238Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-16T18:32:43.677471618Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-16T18:32:43.677474928Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-16T18:32:43.677477618Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-16T18:32:43.677481108Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-16T18:32:43.677485458Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-16T18:32:43.677488529Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-16T18:32:43.677491829Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-16T18:32:43.677495579Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-16T18:32:43.677508959Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-16T18:32:43.677512129Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-16T18:32:43.677515129Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-16T18:32:43.677518199Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-16T18:32:43.677521419Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-16T18:32:43.677524929Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-16T18:32:43.677867071Z level=info msg=FeatureToggles logRowsPopoverMenu=true logsPanelControls=true recoveryThreshold=true prometheusAzureOverrideAudience=true dashgpt=true onPremToCloudMigrations=true formatString=true dashboardScene=true alertRuleRestore=true angularDeprecationUI=true pluginsDetailsRightPanel=true logsExploreTableVisualisation=true cloudWatchCrossAccountQuerying=true logsContextDatasourceUi=true prometheusUsesCombobox=true alertingSimplifiedRouting=true lokiLabelNamesQueryApi=true awsAsyncQueryCaching=true transformationsRedesign=true azureMonitorEnableUserAuth=true annotationPermissionUpdate=true correlations=true alertingRulePermanentlyDelete=true cloudWatchNewLabelParsing=true nestedFolders=true alertingInsights=true ssoSettingsSAML=true newDashboardSharingComponent=true alertingQueryAndExpressionsStepMode=true newFiltersUI=true unifiedStorageSearchPermissionFiltering=true kubernetesClientDashboardsFolders=true alertingRuleVersionHistoryRestore=true azureMonitorPrometheusExemplars=true grafanaconThemes=true alertingUIOptimizeReducer=true useSessionStorageForRedirection=true dashboardSceneForViewers=true kubernetesPlaylists=true influxdbBackendMigration=true dashboardSceneSolo=true alertingNotificationsStepMode=true externalCorePlugins=true publicDashboardsScene=true failWrongDSUID=true unifiedRequestLog=true cloudWatchRoundUpEndTime=true pinNavItems=true lokiStructuredMetadata=true promQLScope=true dataplaneFrontendFallback=true reportingUseRawTimeRange=true lokiQuerySplitting=true recordedQueriesMulti=true lokiQueryHints=true ssoSettingsApi=true alertingApiServer=true tlsMemcached=true groupToNestedTableTransformation=true logsInfiniteScrolling=true preinstallAutoUpdate=true panelMonitoring=true addFieldFromCalculationStatFunctions=true alertingRuleRecoverDeleted=true newPDFRendering=true
grafana | logger=sqlstore t=2025-06-16T18:32:43.677924492Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-16T18:32:43.677937462Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-16T18:32:43.679576545Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-16T18:32:43.679590235Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-16T18:32:43.680268131Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-16T18:32:43.681151958Z level=info msg="Migration successfully executed" id="create migration_log table" duration=883.107µs
grafana | logger=migrator t=2025-06-16T18:32:43.684647167Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-16T18:32:43.685272852Z level=info msg="Migration successfully executed" id="create user table" duration=625.045µs
grafana | logger=migrator t=2025-06-16T18:32:43.689635007Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2025-06-16T18:32:43.690333093Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=697.876µs
grafana | logger=migrator t=2025-06-16T18:32:43.696362593Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-16T18:32:43.697610583Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.24736ms
grafana | logger=migrator t=2025-06-16T18:32:43.701475995Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-16T18:32:43.702581094Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.104179ms
grafana | logger=migrator t=2025-06-16T18:32:43.706213334Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-16T18:32:43.707272443Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.058779ms
grafana | logger=migrator t=2025-06-16T18:32:43.713667866Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-16T18:32:43.716344558Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.676232ms
grafana | logger=migrator t=2025-06-16T18:32:43.719413953Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-16T18:32:43.720666023Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.2513ms
grafana | logger=migrator t=2025-06-16T18:32:43.724337553Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-16T18:32:43.7251776Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=847.297µs
grafana | logger=migrator t=2025-06-16T18:32:43.728642348Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-16T18:32:43.729182843Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=540.155µs
grafana | logger=migrator t=2025-06-16T18:32:43.732261518Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-16T18:32:43.732572681Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=310.653µs
grafana | logger=migrator t=2025-06-16T18:32:43.738335218Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-16T18:32:43.738815962Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=480.564µs
grafana | logger=migrator t=2025-06-16T18:32:43.741766116Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-16T18:32:43.742623233Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=856.447µs
grafana | logger=migrator t=2025-06-16T18:32:43.745597327Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-16T18:32:43.745649868Z level=info msg="Migration successfully executed" id="Update user table charset" duration=52.841µs
grafana | logger=migrator t=2025-06-16T18:32:43.748679613Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-16T18:32:43.749480799Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=800.676µs
grafana | logger=migrator t=2025-06-16T18:32:43.755969143Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-16T18:32:43.756334126Z level=info msg="Migration successfully executed" id="Add missing user data" duration=368.073µs
grafana | logger=migrator t=2025-06-16T18:32:43.782964245Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-16T18:32:43.78609718Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=3.132275ms
grafana | logger=migrator t=2025-06-16T18:32:43.790244234Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-16T18:32:43.79098551Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=740.786µs
grafana | logger=migrator t=2025-06-16T18:32:43.795692039Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-16T18:32:43.796852179Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.161839ms
grafana | logger=migrator t=2025-06-16T18:32:43.800938492Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-16T18:32:43.810681772Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.74361ms
grafana | logger=migrator t=2025-06-16T18:32:43.814436654Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-16T18:32:43.81531132Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=875.046µs
grafana | logger=migrator t=2025-06-16T18:32:43.818961071Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-16T18:32:43.819122632Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=158.791µs
grafana | logger=migrator t=2025-06-16T18:32:43.823380927Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-16T18:32:43.824064693Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=686.186µs
grafana | logger=migrator t=2025-06-16T18:32:43.829498927Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-16T18:32:43.831614975Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=2.114578ms
grafana | logger=migrator t=2025-06-16T18:32:43.835521257Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-16T18:32:43.836264262Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=742.655µs
grafana | logger=migrator t=2025-06-16T18:32:43.841490575Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
grafana | logger=migrator t=2025-06-16T18:32:43.842004939Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=514.214µs
grafana | logger=migrator t=2025-06-16T18:32:43.845516999Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2025-06-16T18:32:43.845975342Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=457.724µs
grafana | logger=migrator t=2025-06-16T18:32:43.84927655Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2025-06-16T18:32:43.849645853Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=369.193µs
grafana | logger=migrator t=2025-06-16T18:32:43.85428399Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2025-06-16T18:32:43.855541091Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.256961ms
grafana | logger=migrator t=2025-06-16T18:32:43.860463382Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2025-06-16T18:32:43.86155462Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.091288ms
grafana | logger=migrator t=2025-06-16T18:32:43.865362772Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2025-06-16T18:32:43.866068808Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=705.676µs
grafana | logger=migrator t=2025-06-16T18:32:43.869740677Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2025-06-16T18:32:43.870484014Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=743.217µs
grafana | logger=migrator t=2025-06-16T18:32:43.875314243Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2025-06-16T18:32:43.876265091Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=949.688µs
grafana | logger=migrator t=2025-06-16T18:32:43.880770248Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2025-06-16T18:32:43.880812919Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=40.281µs
grafana | logger=migrator t=2025-06-16T18:32:43.885230125Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2025-06-16T18:32:43.886640076Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.409551ms
grafana | logger=migrator t=2025-06-16T18:32:43.890404418Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2025-06-16T18:32:43.891448266Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.045168ms
grafana | logger=migrator t=2025-06-16T18:32:43.896223525Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2025-06-16T18:32:43.896701158Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=477.673µs
grafana | logger=migrator t=2025-06-16T18:32:43.901345497Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2025-06-16T18:32:43.902295264Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=949.417µs
grafana | logger=migrator t=2025-06-16T18:32:43.907119495Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-16T18:32:43.911944764Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.824559ms
grafana | logger=migrator t=2025-06-16T18:32:43.915959577Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-16T18:32:43.917041216Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.064559ms
grafana | logger=migrator t=2025-06-16T18:32:43.921692854Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-16T18:32:43.922420891Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=727.817µs
grafana | logger=migrator t=2025-06-16T18:32:43.925952659Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-16T18:32:43.926662875Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=709.676µs
grafana | logger=migrator t=2025-06-16T18:32:43.93092109Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2025-06-16T18:32:43.931644526Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=723.286µs
grafana | logger=migrator t=2025-06-16T18:32:43.935138125Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2025-06-16T18:32:43.93582149Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=683.155µs
grafana | logger=migrator t=2025-06-16T18:32:43.939725872Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2025-06-16T18:32:43.940092765Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=366.643µs
grafana | logger=migrator t=2025-06-16T18:32:43.944884455Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2025-06-16T18:32:43.946133695Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=1.25279ms
grafana | logger=migrator t=2025-06-16T18:32:43.951657121Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2025-06-16T18:32:43.952037424Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=380.543µs
grafana | logger=migrator t=2025-06-16T18:32:43.955447271Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2025-06-16T18:32:43.956074857Z level=info msg="Migration successfully executed" id="create star table" duration=627.186µs
grafana | logger=migrator t=2025-06-16T18:32:43.959604355Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2025-06-16T18:32:43.960347552Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=745.227µs
grafana | logger=migrator t=2025-06-16T18:32:43.964732648Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
grafana | logger=migrator t=2025-06-16T18:32:43.966851135Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=2.116867ms
grafana | logger=migrator t=2025-06-16T18:32:43.970750267Z level=info msg="Executing migration" id="Add column org_id in star"
grafana | logger=migrator t=2025-06-16T18:32:43.972955216Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=2.204949ms
grafana | logger=migrator t=2025-06-16T18:32:43.976479494Z level=info msg="Executing migration" id="Add column updated in star"
grafana | logger=migrator t=2025-06-16T18:32:43.977845215Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.365721ms
grafana | logger=migrator t=2025-06-16T18:32:43.981743148Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns"
grafana | logger=migrator t=2025-06-16T18:32:43.982695156Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=951.998µs
grafana | logger=migrator t=2025-06-16T18:32:43.987251193Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2025-06-16T18:32:43.987972039Z level=info msg="Migration successfully executed" id="create org table v1" duration=720.236µs
grafana | logger=migrator t=2025-06-16T18:32:43.991624729Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2025-06-16T18:32:43.992316775Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=689.336µs
grafana | logger=migrator t=2025-06-16T18:32:43.998532735Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2025-06-16T18:32:43.999745945Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.2134ms
grafana | logger=migrator t=2025-06-16T18:32:44.003608787Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2025-06-16T18:32:44.004971339Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.361602ms
grafana | logger=migrator t=2025-06-16T18:32:44.009566676Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2025-06-16T18:32:44.010632385Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.065879ms
grafana | logger=migrator t=2025-06-16T18:32:44.014263944Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2025-06-16T18:32:44.015093891Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=828.947µs
grafana | logger=migrator t=2025-06-16T18:32:44.018582949Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2025-06-16T18:32:44.018612299Z level=info msg="Migration successfully executed" id="Update org table charset" duration=30.1µs
grafana | logger=migrator t=2025-06-16T18:32:44.022025647Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2025-06-16T18:32:44.022049577Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=24.49µs
grafana | logger=migrator t=2025-06-16T18:32:44.026870966Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2025-06-16T18:32:44.027069208Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=198.192µs
grafana | logger=migrator t=2025-06-16T18:32:44.030564106Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2025-06-16T18:32:44.031732046Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.16719ms
grafana | logger=migrator t=2025-06-16T18:32:44.035410145Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2025-06-16T18:32:44.03709309Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.683145ms
grafana | logger=migrator t=2025-06-16T18:32:44.045435717Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2025-06-16T18:32:44.046459735Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.022918ms
grafana | logger=migrator t=2025-06-16T18:32:44.050573708Z level=info msg="Executing migration" id="create
dashboard_tag table" grafana | logger=migrator t=2025-06-16T18:32:44.051378065Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=804.217µs grafana | logger=migrator t=2025-06-16T18:32:44.054909113Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-16T18:32:44.05573225Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=824.517µs grafana | logger=migrator t=2025-06-16T18:32:44.059258708Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-16T18:32:44.059960925Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=701.547µs grafana | logger=migrator t=2025-06-16T18:32:44.064419261Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-16T18:32:44.069870015Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.449374ms grafana | logger=migrator t=2025-06-16T18:32:44.073527285Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-16T18:32:44.074296531Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=766.576µs grafana | logger=migrator t=2025-06-16T18:32:44.078849397Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-16T18:32:44.079772025Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=921.708µs grafana | logger=migrator t=2025-06-16T18:32:44.083723047Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-16T18:32:44.084633425Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=910.548µs grafana | logger=migrator t=2025-06-16T18:32:44.088017902Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2025-06-16T18:32:44.088423315Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=404.293µs grafana | logger=migrator t=2025-06-16T18:32:44.091659192Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-16T18:32:44.092391357Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=731.595µs grafana | logger=migrator t=2025-06-16T18:32:44.096470181Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-16T18:32:44.096486491Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=17.081µs grafana | logger=migrator t=2025-06-16T18:32:44.099619946Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-16T18:32:44.101425281Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.804595ms grafana | logger=migrator t=2025-06-16T18:32:44.104767998Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-16T18:32:44.108165075Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" 
duration=3.414407ms grafana | logger=migrator t=2025-06-16T18:32:44.144307738Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.148600053Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=4.291515ms grafana | logger=migrator t=2025-06-16T18:32:44.153197871Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.153936316Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=738.225µs grafana | logger=migrator t=2025-06-16T18:32:44.157401625Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.160026405Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.62283ms grafana | logger=migrator t=2025-06-16T18:32:44.163437793Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.164590313Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.15141ms grafana | logger=migrator t=2025-06-16T18:32:44.169454662Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-16T18:32:44.170593981Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.138629ms grafana | logger=migrator t=2025-06-16T18:32:44.174777765Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-16T18:32:44.174801985Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=24.78µs grafana | logger=migrator t=2025-06-16T18:32:44.177982231Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-16T18:32:44.178006191Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=24.83µs grafana | logger=migrator t=2025-06-16T18:32:44.182011834Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.18391477Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.902466ms grafana | logger=migrator t=2025-06-16T18:32:44.186865974Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.188805069Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.936185ms grafana | logger=migrator t=2025-06-16T18:32:44.191918784Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.19387021Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.953196ms grafana | logger=migrator t=2025-06-16T18:32:44.199205673Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.201681533Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.47553ms grafana | logger=migrator t=2025-06-16T18:32:44.20492734Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.205129792Z level=info msg="Migration 
successfully executed" id="Update uid column values in dashboard" duration=201.942µs grafana | logger=migrator t=2025-06-16T18:32:44.208327147Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-16T18:32:44.209123884Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=796.147µs grafana | logger=migrator t=2025-06-16T18:32:44.213868812Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-16T18:32:44.214596928Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=727.516µs grafana | logger=migrator t=2025-06-16T18:32:44.218057037Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-16T18:32:44.218081647Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=24.87µs grafana | logger=migrator t=2025-06-16T18:32:44.222547933Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-16T18:32:44.224354647Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.805054ms grafana | logger=migrator t=2025-06-16T18:32:44.228989705Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-16T18:32:44.229751151Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=761.536µs grafana | logger=migrator t=2025-06-16T18:32:44.23332857Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T18:32:44.238595383Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.266203ms grafana | logger=migrator t=2025-06-16T18:32:44.242085091Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-16T18:32:44.242801908Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=716.506µs grafana | logger=migrator t=2025-06-16T18:32:44.246331696Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-16T18:32:44.247108562Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=776.276µs grafana | logger=migrator t=2025-06-16T18:32:44.252476305Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-16T18:32:44.25419733Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.719845ms grafana | logger=migrator t=2025-06-16T18:32:44.258240443Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-16T18:32:44.258791107Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=549.924µs grafana | logger=migrator t=2025-06-16T18:32:44.262398296Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-16T18:32:44.26288909Z level=info msg="Migration 
successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=490.064µs grafana | logger=migrator t=2025-06-16T18:32:44.26787636Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-16T18:32:44.271770312Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.897082ms grafana | logger=migrator t=2025-06-16T18:32:44.275094929Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-16T18:32:44.2764051Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.309721ms grafana | logger=migrator t=2025-06-16T18:32:44.282834151Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-16T18:32:44.283078353Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=243.652µs grafana | logger=migrator t=2025-06-16T18:32:44.286064697Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-16T18:32:44.286494691Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=429.874µs grafana | logger=migrator t=2025-06-16T18:32:44.29006545Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-16T18:32:44.292315158Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=2.252628ms grafana | logger=migrator t=2025-06-16T18:32:44.297918914Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.300436114Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.51659ms grafana | logger=migrator t=2025-06-16T18:32:44.305175383Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2025-06-16T18:32:44.306849246Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=1.673224ms grafana | logger=migrator t=2025-06-16T18:32:44.309943962Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-16T18:32:44.310673257Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=729.005µs grafana | logger=migrator t=2025-06-16T18:32:44.314006454Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-16T18:32:44.316346633Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.339519ms grafana | logger=migrator t=2025-06-16T18:32:44.3209598Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-16T18:32:44.32337633Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.41581ms grafana | logger=migrator t=2025-06-16T18:32:44.326681496Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-16T18:32:44.327186451Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=500.985µs grafana | logger=migrator t=2025-06-16T18:32:44.330598128Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator 
t=2025-06-16T18:32:44.333036128Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.43652ms grafana | logger=migrator t=2025-06-16T18:32:44.336323345Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-16T18:32:44.337443814Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=1.119529ms grafana | logger=migrator t=2025-06-16T18:32:44.34193321Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-16T18:32:44.342508895Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=574.885µs grafana | logger=migrator t=2025-06-16T18:32:44.345995163Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2025-06-16T18:32:44.347536496Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.541543ms grafana | logger=migrator t=2025-06-16T18:32:44.352624237Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-16T18:32:44.354151649Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.592992ms grafana | logger=migrator t=2025-06-16T18:32:44.359280671Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-16T18:32:44.36034405Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.063119ms grafana | logger=migrator t=2025-06-16T18:32:44.363794318Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-16T18:32:44.364629845Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=835.377µs grafana | logger=migrator t=2025-06-16T18:32:44.367992622Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-16T18:32:44.368857109Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=863.807µs grafana | logger=migrator t=2025-06-16T18:32:44.373447716Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-16T18:32:44.380324092Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.875436ms grafana | logger=migrator t=2025-06-16T18:32:44.383975481Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-16T18:32:44.384920339Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=944.838µs grafana | logger=migrator t=2025-06-16T18:32:44.38993264Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-16T18:32:44.390853037Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=917.337µs grafana | logger=migrator t=2025-06-16T18:32:44.394243095Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-16T18:32:44.397273699Z level=info msg="Migration successfully executed" id="create index 
UQE_data_source_org_id_name - v2" duration=3.025344ms grafana | logger=migrator t=2025-06-16T18:32:44.402043487Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-16T18:32:44.403069286Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.024979ms grafana | logger=migrator t=2025-06-16T18:32:44.407737734Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-16T18:32:44.411072741Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.334327ms grafana | logger=migrator t=2025-06-16T18:32:44.415618608Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-16T18:32:44.418147558Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.52783ms grafana | logger=migrator t=2025-06-16T18:32:44.421254013Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-16T18:32:44.421352064Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=98.341µs grafana | logger=migrator t=2025-06-16T18:32:44.426335185Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2025-06-16T18:32:44.426694398Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=358.533µs grafana | logger=migrator t=2025-06-16T18:32:44.430941882Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-16T18:32:44.43570979Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.768718ms grafana | logger=migrator t=2025-06-16T18:32:44.439622222Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2025-06-16T18:32:44.439972935Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=350.063µs grafana | logger=migrator t=2025-06-16T18:32:44.443170291Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2025-06-16T18:32:44.443454564Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=283.683µs grafana | logger=migrator t=2025-06-16T18:32:44.448149231Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-16T18:32:44.450698932Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.548981ms grafana | logger=migrator t=2025-06-16T18:32:44.454979417Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-16T18:32:44.455261369Z level=info msg="Migration successfully executed" id="Update uid value" duration=281.682µs grafana | logger=migrator t=2025-06-16T18:32:44.458588676Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-16T18:32:44.459466803Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=877.847µs grafana | logger=migrator t=2025-06-16T18:32:44.496865777Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-16T18:32:44.499120845Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=2.255218ms 
grafana | logger=migrator t=2025-06-16T18:32:44.503665852Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-16T18:32:44.508221559Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=4.558627ms grafana | logger=migrator t=2025-06-16T18:32:44.511704927Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-16T18:32:44.514789852Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.084005ms grafana | logger=migrator t=2025-06-16T18:32:44.521713918Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-16T18:32:44.521803159Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=90.471µs grafana | logger=migrator t=2025-06-16T18:32:44.526699428Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-16T18:32:44.528241411Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.541173ms grafana | logger=migrator t=2025-06-16T18:32:44.53183142Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-16T18:32:44.532512455Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=681.025µs grafana | logger=migrator t=2025-06-16T18:32:44.535707352Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-16T18:32:44.536342907Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=636.895µs grafana | logger=migrator t=2025-06-16T18:32:44.541425418Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-16T18:32:44.54289626Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.472942ms grafana | logger=migrator t=2025-06-16T18:32:44.546792012Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-16T18:32:44.54784069Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.048777ms grafana | logger=migrator t=2025-06-16T18:32:44.552427547Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-16T18:32:44.553375194Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=947.387µs grafana | logger=migrator t=2025-06-16T18:32:44.558132824Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-16T18:32:44.558780118Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=646.574µs grafana | logger=migrator t=2025-06-16T18:32:44.561612611Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-16T18:32:44.570940287Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=9.326986ms grafana | logger=migrator t=2025-06-16T18:32:44.577008866Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-16T18:32:44.578164246Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.15299ms 
grafana | logger=migrator t=2025-06-16T18:32:44.582782203Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-16T18:32:44.584228714Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.446531ms grafana | logger=migrator t=2025-06-16T18:32:44.587417801Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-16T18:32:44.588279617Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=861.666µs grafana | logger=migrator t=2025-06-16T18:32:44.593275308Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-16T18:32:44.594290797Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.014349ms grafana | logger=migrator t=2025-06-16T18:32:44.597916026Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-16T18:32:44.598618112Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=701.506µs grafana | logger=migrator t=2025-06-16T18:32:44.602948417Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-16T18:32:44.603768963Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=820.246µs grafana | logger=migrator t=2025-06-16T18:32:44.608474432Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-16T18:32:44.608585512Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=110.32µs grafana | logger=migrator t=2025-06-16T18:32:44.61200985Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-16T18:32:44.614713492Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.702912ms grafana | logger=migrator t=2025-06-16T18:32:44.618197731Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2025-06-16T18:32:44.620806112Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.607662ms grafana | logger=migrator t=2025-06-16T18:32:44.62682221Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-16T18:32:44.627104372Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=281.552µs grafana | logger=migrator t=2025-06-16T18:32:44.631031534Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-16T18:32:44.634689703Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.657339ms grafana | logger=migrator t=2025-06-16T18:32:44.637809969Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-16T18:32:44.640523242Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.712832ms grafana | logger=migrator t=2025-06-16T18:32:44.643642607Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-16T18:32:44.644452183Z level=info msg="Migration successfully executed" 
id="create dashboard_snapshot table v4" duration=808.906µs grafana | logger=migrator t=2025-06-16T18:32:44.649478194Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-16T18:32:44.65018295Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=701.466µs grafana | logger=migrator t=2025-06-16T18:32:44.654564805Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-16T18:32:44.656159348Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.593653ms grafana | logger=migrator t=2025-06-16T18:32:44.662313278Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-16T18:32:44.66392211Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.608302ms grafana | logger=migrator t=2025-06-16T18:32:44.667713991Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-16T18:32:44.669223944Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.509863ms grafana | logger=migrator t=2025-06-16T18:32:44.672775843Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-16T18:32:44.673658309Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=882.396µs grafana | logger=migrator t=2025-06-16T18:32:44.676907537Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-16T18:32:44.676962387Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=54.98µs grafana | logger=migrator t=2025-06-16T18:32:44.682828535Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-16T18:32:44.682981846Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=154.871µs grafana | logger=migrator t=2025-06-16T18:32:44.68841103Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-16T18:32:44.691541285Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.129665ms grafana | logger=migrator t=2025-06-16T18:32:44.694758191Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-16T18:32:44.697616024Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.857293ms grafana | logger=migrator t=2025-06-16T18:32:44.702913237Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-16T18:32:44.702986958Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=74.151µs grafana | logger=migrator t=2025-06-16T18:32:44.706273375Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-16T18:32:44.707131801Z level=info msg="Migration successfully executed" id="create quota table v1" 
duration=857.436µs grafana | logger=migrator t=2025-06-16T18:32:44.71067289Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-16T18:32:44.71184271Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.16926ms grafana | logger=migrator t=2025-06-16T18:32:44.715139946Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-16T18:32:44.715208457Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=69.091µs grafana | logger=migrator t=2025-06-16T18:32:44.719645143Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-16T18:32:44.72054435Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=898.687µs grafana | logger=migrator t=2025-06-16T18:32:44.723712416Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-16T18:32:44.724603863Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=890.977µs grafana | logger=migrator t=2025-06-16T18:32:44.728107371Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-16T18:32:44.731412259Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.304108ms grafana | logger=migrator t=2025-06-16T18:32:44.738807038Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-16T18:32:44.738918409Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=108.301µs grafana | logger=migrator t=2025-06-16T18:32:44.742065244Z level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-16T18:32:44.742480888Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=415.214µs grafana | logger=migrator t=2025-06-16T18:32:44.745655874Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-16T18:32:44.759902979Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=14.247405ms grafana | logger=migrator t=2025-06-16T18:32:44.765312123Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-16T18:32:44.765949058Z level=info msg="Migration successfully executed" id="create session table" duration=636.615µs grafana | logger=migrator t=2025-06-16T18:32:44.769095653Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-16T18:32:44.769231675Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=118.452µs grafana | logger=migrator t=2025-06-16T18:32:44.772159229Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-16T18:32:44.77231772Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=157.841µs grafana | logger=migrator t=2025-06-16T18:32:44.780143433Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-16T18:32:44.781249992Z 
level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.106399ms grafana | logger=migrator t=2025-06-16T18:32:44.785020233Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-16T18:32:44.78724301Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=2.221397ms grafana | logger=migrator t=2025-06-16T18:32:44.791391494Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-16T18:32:44.791582076Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=193.992µs grafana | logger=migrator t=2025-06-16T18:32:44.795128265Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-16T18:32:44.795267006Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=137.751µs grafana | logger=migrator t=2025-06-16T18:32:44.800291447Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-16T18:32:44.803916506Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.624349ms grafana | logger=migrator t=2025-06-16T18:32:44.807924868Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-16T18:32:44.811384687Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.456469ms grafana | logger=migrator t=2025-06-16T18:32:44.850695385Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-16T18:32:44.851242629Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=549.714µs grafana | logger=migrator t=2025-06-16T18:32:44.857782933Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-16T18:32:44.857980354Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=196.561µs grafana | logger=migrator t=2025-06-16T18:32:44.861403531Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2025-06-16T18:32:44.86244866Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.044849ms grafana | logger=migrator t=2025-06-16T18:32:44.865571125Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-16T18:32:44.865650196Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=79.311µs grafana | logger=migrator t=2025-06-16T18:32:44.868802152Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-16T18:32:44.872223919Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.421227ms grafana | logger=migrator t=2025-06-16T18:32:44.878175247Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-16T18:32:44.878509411Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=334.154µs grafana | logger=migrator t=2025-06-16T18:32:44.882839596Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator 
t=2025-06-16T18:32:44.886093272Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.252976ms grafana | logger=migrator t=2025-06-16T18:32:44.889212307Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-16T18:32:44.891580417Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.36597ms grafana | logger=migrator t=2025-06-16T18:32:44.896038953Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-16T18:32:44.896102053Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=63.92µs grafana | logger=migrator t=2025-06-16T18:32:44.898925196Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-16T18:32:44.899894134Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=968.868µs grafana | logger=migrator t=2025-06-16T18:32:44.904748073Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-16T18:32:44.907813679Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=3.064776ms grafana | logger=migrator t=2025-06-16T18:32:44.91299499Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-16T18:32:44.914296781Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.301711ms grafana | logger=migrator t=2025-06-16T18:32:44.917723199Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-16T18:32:44.918884658Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.160989ms grafana | logger=migrator t=2025-06-16T18:32:44.922137535Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-16T18:32:44.923172684Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.034958ms grafana | logger=migrator t=2025-06-16T18:32:44.927908222Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-16T18:32:44.929403624Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.495153ms grafana | logger=migrator t=2025-06-16T18:32:44.933143914Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-16T18:32:44.9338712Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=726.876µs grafana | logger=migrator t=2025-06-16T18:32:44.937220377Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-16T18:32:44.938128464Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=907.307µs grafana | logger=migrator t=2025-06-16T18:32:44.942738761Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-16T18:32:44.944072632Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.337431ms grafana | logger=migrator t=2025-06-16T18:32:44.947903273Z level=info msg="Executing migration" id="Rename 
table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2025-06-16T18:32:44.960237183Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=12.33434ms grafana | logger=migrator t=2025-06-16T18:32:44.964264726Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-16T18:32:44.964948781Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=681.735µs grafana | logger=migrator t=2025-06-16T18:32:44.969295446Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-16T18:32:44.970255454Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=959.168µs grafana | logger=migrator t=2025-06-16T18:32:44.973755653Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-16T18:32:44.974133946Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=377.413µs grafana | logger=migrator t=2025-06-16T18:32:44.978002648Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-16T18:32:44.978670333Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=666.745µs grafana | logger=migrator t=2025-06-16T18:32:44.982927777Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-16T18:32:44.984276078Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.346461ms grafana | logger=migrator t=2025-06-16T18:32:44.988084589Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-16T18:32:44.993034069Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.95001ms grafana | logger=migrator t=2025-06-16T18:32:44.997053482Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2025-06-16T18:32:45.00044708Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.393338ms grafana | logger=migrator t=2025-06-16T18:32:45.004468512Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-16T18:32:45.007267705Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.797863ms grafana | logger=migrator t=2025-06-16T18:32:45.01160168Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-16T18:32:45.015368819Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.766439ms grafana | logger=migrator t=2025-06-16T18:32:45.018630767Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-16T18:32:45.019768446Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.16517ms grafana | logger=migrator t=2025-06-16T18:32:45.024795946Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-16T18:32:45.024869027Z level=info msg="Migration successfully executed" 
id="Update alert table charset" duration=73.211µs grafana | logger=migrator t=2025-06-16T18:32:45.028271625Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-16T18:32:45.028343785Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=72.39µs grafana | logger=migrator t=2025-06-16T18:32:45.031040807Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-16T18:32:45.032374488Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.332861ms grafana | logger=migrator t=2025-06-16T18:32:45.03880831Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-16T18:32:45.039866479Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.070748ms grafana | logger=migrator t=2025-06-16T18:32:45.044363545Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-16T18:32:45.045350213Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=986.108µs grafana | logger=migrator t=2025-06-16T18:32:45.048984032Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-16T18:32:45.050517314Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.532822ms grafana | logger=migrator t=2025-06-16T18:32:45.055066742Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-16T18:32:45.056520113Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.453181ms grafana | logger=migrator t=2025-06-16T18:32:45.060163932Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-16T18:32:45.064296766Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.132514ms grafana | logger=migrator t=2025-06-16T18:32:45.06848909Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-16T18:32:45.072352921Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.863231ms grafana | logger=migrator t=2025-06-16T18:32:45.07714368Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-16T18:32:45.077633534Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=489.334µs grafana | logger=migrator t=2025-06-16T18:32:45.081608397Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-16T18:32:45.08330851Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.698663ms grafana | logger=migrator t=2025-06-16T18:32:45.087921397Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-16T18:32:45.088812435Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=890.438µs grafana 
grafana | logger=migrator t=2025-06-16T18:32:45.094006117Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2025-06-16T18:32:45.097985989Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.979192ms
grafana | logger=migrator t=2025-06-16T18:32:45.101442707Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2025-06-16T18:32:45.101587418Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=145.691µs
grafana | logger=migrator t=2025-06-16T18:32:45.105161788Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2025-06-16T18:32:45.10676538Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.603263ms
grafana | logger=migrator t=2025-06-16T18:32:45.112398036Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2025-06-16T18:32:45.113748397Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.350451ms
grafana | logger=migrator t=2025-06-16T18:32:45.119690605Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2025-06-16T18:32:45.119851376Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=159.951µs
grafana | logger=migrator t=2025-06-16T18:32:45.123492936Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2025-06-16T18:32:45.125126069Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.632543ms
grafana | logger=migrator t=2025-06-16T18:32:45.13026327Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2025-06-16T18:32:45.131544541Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.282021ms
grafana | logger=migrator t=2025-06-16T18:32:45.135182301Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2025-06-16T18:32:45.136258479Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.075569ms
grafana | logger=migrator t=2025-06-16T18:32:45.139662157Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2025-06-16T18:32:45.140558704Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=896.207µs
grafana | logger=migrator t=2025-06-16T18:32:45.144603147Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2025-06-16T18:32:45.14623501Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.630233ms
grafana | logger=migrator t=2025-06-16T18:32:45.150080051Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2025-06-16T18:32:45.150949358Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=868.857µs
grafana | logger=migrator t=2025-06-16T18:32:45.154484147Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2025-06-16T18:32:45.154506447Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=22.55µs
grafana | logger=migrator t=2025-06-16T18:32:45.158870012Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.165728928Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.858436ms
grafana | logger=migrator t=2025-06-16T18:32:45.211030235Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2025-06-16T18:32:45.212267305Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.23705ms
grafana | logger=migrator t=2025-06-16T18:32:45.216160087Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.223351215Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=7.192108ms
grafana | logger=migrator t=2025-06-16T18:32:45.227877321Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2025-06-16T18:32:45.228456656Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=578.455µs
grafana | logger=migrator t=2025-06-16T18:32:45.231798003Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2025-06-16T18:32:45.233465016Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.666513ms
grafana | logger=migrator t=2025-06-16T18:32:45.238599488Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2025-06-16T18:32:45.240234191Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.633103ms
grafana | logger=migrator t=2025-06-16T18:32:45.245401123Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2025-06-16T18:32:45.259518888Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.113574ms
grafana | logger=migrator t=2025-06-16T18:32:45.263297837Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2025-06-16T18:32:45.263858253Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=559.916µs
grafana | logger=migrator t=2025-06-16T18:32:45.269483608Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2025-06-16T18:32:45.270462856Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=978.918µs
grafana | logger=migrator t=2025-06-16T18:32:45.274763481Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2025-06-16T18:32:45.275312985Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=548.564µs
grafana | logger=migrator t=2025-06-16T18:32:45.278945715Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2025-06-16T18:32:45.279532059Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=585.864µs
grafana | logger=migrator t=2025-06-16T18:32:45.283100209Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2025-06-16T18:32:45.283383621Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=282.652µs
grafana | logger=migrator t=2025-06-16T18:32:45.287516244Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.296405136Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=8.894872ms
grafana | logger=migrator t=2025-06-16T18:32:45.302036272Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.306324356Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.287224ms
grafana | logger=migrator t=2025-06-16T18:32:45.310332049Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.311355008Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.019209ms
grafana | logger=migrator t=2025-06-16T18:32:45.314770155Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.315756533Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=986.048µs
grafana | logger=migrator t=2025-06-16T18:32:45.320541792Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2025-06-16T18:32:45.320981166Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=438.745µs
grafana | logger=migrator t=2025-06-16T18:32:45.326240518Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2025-06-16T18:32:45.331321409Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=5.080801ms
grafana | logger=migrator t=2025-06-16T18:32:45.334583505Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2025-06-16T18:32:45.335544283Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=959.898µs
grafana | logger=migrator t=2025-06-16T18:32:45.340243231Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
grafana | logger=migrator t=2025-06-16T18:32:45.340480453Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=236.272µs
grafana | logger=migrator t=2025-06-16T18:32:45.343889011Z level=info msg="Executing migration" id="Move region to single row"
grafana | logger=migrator t=2025-06-16T18:32:45.344520395Z level=info msg="Migration successfully executed" id="Move region to single row" duration=629.254µs
grafana | logger=migrator t=2025-06-16T18:32:45.348438967Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.35001859Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.584023ms
grafana | logger=migrator t=2025-06-16T18:32:45.35363448Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.35491421Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.27953ms
grafana | logger=migrator t=2025-06-16T18:32:45.35861369Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.360141403Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.527143ms
grafana | logger=migrator t=2025-06-16T18:32:45.365300214Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.366142071Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=845.907µs
grafana | logger=migrator t=2025-06-16T18:32:45.368928854Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.370083133Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.152949ms
grafana | logger=migrator t=2025-06-16T18:32:45.37339931Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2025-06-16T18:32:45.374748861Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.349081ms
grafana | logger=migrator t=2025-06-16T18:32:45.381547176Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2025-06-16T18:32:45.381563476Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=19.31µs
grafana | logger=migrator t=2025-06-16T18:32:45.384750702Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null"
grafana | logger=migrator t=2025-06-16T18:32:45.384770982Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=18.75µs
grafana | logger=migrator t=2025-06-16T18:32:45.387739316Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null"
grafana | logger=migrator t=2025-06-16T18:32:45.387756896Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=18.3µs
grafana | logger=migrator t=2025-06-16T18:32:45.392638946Z level=info msg="Executing migration" id="create test_data table"
grafana | logger=migrator t=2025-06-16T18:32:45.394053387Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.413331ms
grafana | logger=migrator t=2025-06-16T18:32:45.397710636Z level=info msg="Executing migration" id="create dashboard_version table v1"
grafana | logger=migrator t=2025-06-16T18:32:45.398547984Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=836.938µs
grafana | logger=migrator t=2025-06-16T18:32:45.403649805Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
grafana | logger=migrator t=2025-06-16T18:32:45.404631753Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=981.678µs
grafana | logger=migrator t=2025-06-16T18:32:45.410212348Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | logger=migrator t=2025-06-16T18:32:45.411114545Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=901.877µs
grafana | logger=migrator t=2025-06-16T18:32:45.414708285Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2025-06-16T18:32:45.414887296Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=179.021µs
grafana | logger=migrator t=2025-06-16T18:32:45.418294674Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2025-06-16T18:32:45.418654356Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=353.362µs
grafana | logger=migrator t=2025-06-16T18:32:45.421794492Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
grafana | logger=migrator t=2025-06-16T18:32:45.421817773Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=24.351µs
grafana | logger=migrator t=2025-06-16T18:32:45.427219526Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version"
grafana | logger=migrator t=2025-06-16T18:32:45.435754945Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=8.534659ms
grafana | logger=migrator t=2025-06-16T18:32:45.439215723Z level=info msg="Executing migration" id="create team table"
grafana | logger=migrator t=2025-06-16T18:32:45.439757247Z level=info msg="Migration successfully executed" id="create team table" duration=540.984µs
grafana | logger=migrator t=2025-06-16T18:32:45.442844552Z level=info msg="Executing migration" id="add index team.org_id"
grafana | logger=migrator t=2025-06-16T18:32:45.443470687Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=626.015µs
grafana | logger=migrator t=2025-06-16T18:32:45.44754163Z level=info msg="Executing migration" id="add unique index team_org_id_name"
grafana | logger=migrator t=2025-06-16T18:32:45.449016142Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.473202ms
grafana | logger=migrator t=2025-06-16T18:32:45.452630362Z level=info msg="Executing migration" id="Add column uid in team"
grafana | logger=migrator t=2025-06-16T18:32:45.457487741Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.856729ms
grafana | logger=migrator t=2025-06-16T18:32:45.461152701Z level=info msg="Executing migration" id="Update uid column values in team"
grafana | logger=migrator t=2025-06-16T18:32:45.461416033Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=263.972µs
grafana | logger=migrator t=2025-06-16T18:32:45.465979039Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
grafana | logger=migrator t=2025-06-16T18:32:45.466968888Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=990.129µs
grafana | logger=migrator t=2025-06-16T18:32:45.470494736Z level=info msg="Executing migration" id="Add column external_uid in team"
grafana | logger=migrator t=2025-06-16T18:32:45.475124084Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.628878ms
grafana | logger=migrator t=2025-06-16T18:32:45.47954226Z level=info msg="Executing migration" id="Add column is_provisioned in team"
grafana | logger=migrator t=2025-06-16T18:32:45.48454254Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.99728ms
grafana | logger=migrator t=2025-06-16T18:32:45.489173458Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2025-06-16T18:32:45.490466178Z level=info msg="Migration successfully executed" id="create team member table" duration=1.27107ms
grafana | logger=migrator t=2025-06-16T18:32:45.494077907Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2025-06-16T18:32:45.495119146Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.040579ms
grafana | logger=migrator t=2025-06-16T18:32:45.498415432Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2025-06-16T18:32:45.499511131Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.095059ms
grafana | logger=migrator t=2025-06-16T18:32:45.503849956Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2025-06-16T18:32:45.504964146Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.11344ms
grafana | logger=migrator t=2025-06-16T18:32:45.509396061Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2025-06-16T18:32:45.517421226Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.023465ms
grafana | logger=migrator t=2025-06-16T18:32:45.52162559Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2025-06-16T18:32:45.528642417Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=7.017417ms
grafana | logger=migrator t=2025-06-16T18:32:45.534110772Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2025-06-16T18:32:45.538409476Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.297214ms
grafana | logger=migrator t=2025-06-16T18:32:45.541863304Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id"
grafana | logger=migrator t=2025-06-16T18:32:45.543081854Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.21816ms
grafana | logger=migrator t=2025-06-16T18:32:45.546390951Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2025-06-16T18:32:45.547479189Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.088488ms
grafana | logger=migrator t=2025-06-16T18:32:45.583930105Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
grafana | logger=migrator t=2025-06-16T18:32:45.586139323Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=2.211648ms
grafana | logger=migrator t=2025-06-16T18:32:45.590527729Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | logger=migrator t=2025-06-16T18:32:45.591185994Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=657.655µs
grafana | logger=migrator t=2025-06-16T18:32:45.594535951Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | logger=migrator t=2025-06-16T18:32:45.595406439Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=869.248µs
grafana | logger=migrator t=2025-06-16T18:32:45.598511694Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2025-06-16T18:32:45.599609603Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.096889ms
grafana | logger=migrator t=2025-06-16T18:32:45.603736266Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2025-06-16T18:32:45.604658313Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=921.077µs
grafana | logger=migrator t=2025-06-16T18:32:45.609734564Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2025-06-16T18:32:45.610627332Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=892.288µs
grafana | logger=migrator t=2025-06-16T18:32:45.614305721Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2025-06-16T18:32:45.615999164Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.692633ms
grafana | logger=migrator t=2025-06-16T18:32:45.620944045Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2025-06-16T18:32:45.621543959Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=600.504µs
grafana | logger=migrator t=2025-06-16T18:32:45.62523086Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
grafana | logger=migrator t=2025-06-16T18:32:45.625645803Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=416.773µs
grafana | logger=migrator t=2025-06-16T18:32:45.630292711Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2025-06-16T18:32:45.631314629Z level=info msg="Migration successfully executed" id="create tag table" duration=1.021917ms
grafana | logger=migrator t=2025-06-16T18:32:45.635021639Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2025-06-16T18:32:45.636021777Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.000248ms
grafana | logger=migrator t=2025-06-16T18:32:45.639556635Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2025-06-16T18:32:45.640362472Z level=info msg="Migration successfully executed" id="create login attempt table" duration=805.887µs
grafana | logger=migrator t=2025-06-16T18:32:45.644664847Z level=info msg="Executing migration" id="add index login_attempt.username"
grafana | logger=migrator t=2025-06-16T18:32:45.645635215Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=970.218µs
grafana | logger=migrator t=2025-06-16T18:32:45.649120463Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2025-06-16T18:32:45.650032961Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=912.048µs
grafana | logger=migrator t=2025-06-16T18:32:45.653893791Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-16T18:32:45.667767704Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=13.874173ms
grafana | logger=migrator t=2025-06-16T18:32:45.672237601Z level=info msg="Executing migration" id="create login_attempt v2"
grafana | logger=migrator t=2025-06-16T18:32:45.673071217Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=833.626µs
grafana | logger=migrator t=2025-06-16T18:32:45.676514426Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
grafana | logger=migrator t=2025-06-16T18:32:45.677827036Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.31249ms
grafana | logger=migrator t=2025-06-16T18:32:45.681308764Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
grafana | logger=migrator t=2025-06-16T18:32:45.681656077Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=346.883µs
grafana | logger=migrator t=2025-06-16T18:32:45.68705236Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
grafana | logger=migrator t=2025-06-16T18:32:45.687810907Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=755.517µs
grafana | logger=migrator t=2025-06-16T18:32:45.691116694Z level=info msg="Executing migration" id="create user auth table"
grafana | logger=migrator t=2025-06-16T18:32:45.692114982Z level=info msg="Migration successfully executed" id="create user auth table" duration=997.679µs
grafana | logger=migrator t=2025-06-16T18:32:45.69571286Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | logger=migrator t=2025-06-16T18:32:45.696733899Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.021599ms
grafana | logger=migrator t=2025-06-16T18:32:45.701041683Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
grafana | logger=migrator t=2025-06-16T18:32:45.701058174Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=18.78µs
grafana | logger=migrator t=2025-06-16T18:32:45.706080925Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
grafana | logger=migrator t=2025-06-16T18:32:45.713540975Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.45823ms
grafana | logger=migrator t=2025-06-16T18:32:45.717368426Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
grafana | logger=migrator t=2025-06-16T18:32:45.72272135Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.352404ms
grafana | logger=migrator t=2025-06-16T18:32:45.727123725Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
grafana | logger=migrator t=2025-06-16T18:32:45.732986372Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.861887ms
grafana | logger=migrator t=2025-06-16T18:32:45.73765238Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
grafana | logger=migrator t=2025-06-16T18:32:45.743456577Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.804037ms
grafana | logger=migrator t=2025-06-16T18:32:45.747057926Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
grafana | logger=migrator t=2025-06-16T18:32:45.747996574Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=938.448µs
grafana | logger=migrator t=2025-06-16T18:32:45.752227919Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
grafana | logger=migrator t=2025-06-16T18:32:45.756104479Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.87582ms
grafana | logger=migrator t=2025-06-16T18:32:45.760877129Z level=info msg="Executing migration" id="Add user_unique_id to user_auth"
grafana | logger=migrator t=2025-06-16T18:32:45.767042818Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=6.164979ms
grafana | logger=migrator t=2025-06-16T18:32:45.770488187Z level=info msg="Executing migration" id="create server_lock table"
grafana | logger=migrator t=2025-06-16T18:32:45.771246682Z level=info msg="Migration successfully executed" id="create server_lock table" duration=760.955µs
grafana | logger=migrator t=2025-06-16T18:32:45.775817259Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
grafana | logger=migrator t=2025-06-16T18:32:45.776835868Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.018209ms
grafana | logger=migrator t=2025-06-16T18:32:45.780434007Z level=info msg="Executing migration" id="create user auth token table"
grafana | logger=migrator t=2025-06-16T18:32:45.78205485Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.627713ms
grafana | logger=migrator t=2025-06-16T18:32:45.786270474Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
grafana | logger=migrator t=2025-06-16T18:32:45.788520173Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.250579ms
grafana | logger=migrator t=2025-06-16T18:32:45.79322892Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
grafana | logger=migrator t=2025-06-16T18:32:45.794300739Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.071209ms
grafana | logger=migrator t=2025-06-16T18:32:45.798286902Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
grafana | logger=migrator t=2025-06-16T18:32:45.7993564Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.069818ms
grafana | logger=migrator t=2025-06-16T18:32:45.802926479Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
grafana | logger=migrator t=2025-06-16T18:32:45.810046077Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=7.119178ms
grafana | logger=migrator t=2025-06-16T18:32:45.814584303Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
grafana | logger=migrator t=2025-06-16T18:32:45.815527971Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=943.668µs
grafana | logger=migrator t=2025-06-16T18:32:45.818856888Z level=info msg="Executing migration" id="add external_session_id to user_auth_token"
grafana | logger=migrator t=2025-06-16T18:32:45.824360932Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=5.502534ms
grafana | logger=migrator t=2025-06-16T18:32:45.82766201Z level=info msg="Executing migration" id="create cache_data table"
grafana | logger=migrator t=2025-06-16T18:32:45.828668847Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.006877ms
grafana | logger=migrator t=2025-06-16T18:32:45.832995122Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
grafana | logger=migrator t=2025-06-16T18:32:45.834080611Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.085429ms
grafana | logger=migrator t=2025-06-16T18:32:45.83760232Z level=info msg="Executing migration" id="create short_url table v1"
grafana | logger=migrator t=2025-06-16T18:32:45.838518867Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=911.367µs
grafana | logger=migrator t=2025-06-16T18:32:45.842081066Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
grafana | logger=migrator t=2025-06-16T18:32:45.843399777Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.357111ms
grafana | logger=migrator t=2025-06-16T18:32:45.847822863Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
grafana | logger=migrator t=2025-06-16T18:32:45.847841373Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=19.58µs
grafana | logger=migrator t=2025-06-16T18:32:45.851340201Z level=info msg="Executing migration" id="delete alert_definition table"
grafana | logger=migrator t=2025-06-16T18:32:45.851446443Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=81.34µs
grafana | logger=migrator t=2025-06-16T18:32:45.85491717Z level=info msg="Executing migration" id="recreate alert_definition table"
grafana | logger=migrator t=2025-06-16T18:32:45.855855918Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=938.218µs
grafana | logger=migrator t=2025-06-16T18:32:45.862534771Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2025-06-16T18:32:45.86357616Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.040989ms
grafana | logger=migrator t=2025-06-16T18:32:45.869267096Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2025-06-16T18:32:45.87095945Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.691474ms
grafana | logger=migrator t=2025-06-16T18:32:45.874554939Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-16T18:32:45.874576519Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=22.37µs
grafana | logger=migrator t=2025-06-16T18:32:45.87828461Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2025-06-16T18:32:45.879380608Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.090368ms
grafana | logger=migrator t=2025-06-16T18:32:45.883474701Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2025-06-16T18:32:45.884874762Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.398631ms
grafana | logger=migrator t=2025-06-16T18:32:45.889185297Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2025-06-16T18:32:45.890955232Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.770215ms
grafana | logger=migrator t=2025-06-16T18:32:45.896720409Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2025-06-16T18:32:45.898271621Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.551432ms
grafana | logger=migrator t=2025-06-16T18:32:45.901849511Z level=info msg="Executing migration" id="Add column paused in alert_definition"
grafana | logger=migrator t=2025-06-16T18:32:45.90799051Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.140199ms
grafana | logger=migrator t=2025-06-16T18:32:45.932872132Z level=info msg="Executing migration" id="drop alert_definition table"
grafana | logger=migrator t=2025-06-16T18:32:45.93513001Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=2.256528ms
grafana | logger=migrator t=2025-06-16T18:32:45.940993857Z level=info msg="Executing migration" id="delete alert_definition_version table"
grafana | logger=migrator t=2025-06-16T18:32:45.94130287Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=311.173µs
grafana | logger=migrator t=2025-06-16T18:32:45.944636277Z level=info msg="Executing migration" id="recreate alert_definition_version table"
grafana | logger=migrator t=2025-06-16T18:32:45.94621903Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.581912ms
grafana | logger=migrator t=2025-06-16T18:32:45.951244771Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
grafana | logger=migrator t=2025-06-16T18:32:45.95232756Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.082569ms
grafana | logger=migrator t=2025-06-16T18:32:45.956519354Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
grafana | logger=migrator t=2025-06-16T18:32:45.957912845Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.392371ms
grafana | logger=migrator t=2025-06-16T18:32:45.961812476Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-16T18:32:45.961843566Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=83.061µs
grafana | logger=migrator t=2025-06-16T18:32:45.967486572Z level=info msg="Executing migration" id="drop alert_definition_version table"
grafana | logger=migrator t=2025-06-16T18:32:45.96853253Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.045228ms
grafana | logger=migrator t=2025-06-16T18:32:45.971503674Z level=info msg="Executing migration" id="create alert_instance table"
grafana | logger=migrator t=2025-06-16T18:32:45.972624363Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.120159ms
grafana | logger=migrator t=2025-06-16T18:32:45.975542967Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
grafana | logger=migrator t=2025-06-16T18:32:45.97721923Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.671283ms
grafana | logger=migrator t=2025-06-16T18:32:45.983631513Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
grafana | logger=migrator t=2025-06-16T18:32:45.984678331Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.048888ms
grafana | logger=migrator t=2025-06-16T18:32:45.987912447Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
grafana | logger=migrator t=2025-06-16T18:32:45.996631698Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.717661ms
grafana | logger=migrator t=2025-06-16T18:32:45.999751853Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
grafana | logger=migrator t=2025-06-16T18:32:46.000418079Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=666.016µs
grafana | logger=migrator t=2025-06-16T18:32:46.006680219Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
grafana | logger=migrator t=2025-06-16T18:32:46.008123471Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.443112ms
grafana | logger=migrator t=2025-06-16T18:32:46.011501219Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
grafana | logger=migrator t=2025-06-16T18:32:46.036880934Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.376295ms
grafana | logger=migrator t=2025-06-16T18:32:46.041793973Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
grafana | logger=migrator t=2025-06-16T18:32:46.071769846Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=29.985013ms
grafana | logger=migrator t=2025-06-16T18:32:46.077631043Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
grafana | logger=migrator t=2025-06-16T18:32:46.07846371Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=832.977µs
grafana | logger=migrator t=2025-06-16T18:32:46.084361958Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
grafana | logger=migrator t=2025-06-16T18:32:46.086099702Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.737084ms
grafana | logger=migrator t=2025-06-16T18:32:46.095790121Z level=info msg="Executing migration" id="add current_reason column related to current_state"
grafana | logger=migrator t=2025-06-16T18:32:46.104609702Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.78548ms
grafana | logger=migrator t=2025-06-16T18:32:46.110139797Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
grafana | logger=migrator t=2025-06-16T18:32:46.115188328Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.047481ms
grafana | logger=migrator t=2025-06-16T18:32:46.118536564Z level=info msg="Executing migration" id="create alert_rule table"
grafana | logger=migrator t=2025-06-16T18:32:46.119619843Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.083039ms
grafana | logger=migrator t=2025-06-16T18:32:46.1254579Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
grafana | logger=migrator t=2025-06-16T18:32:46.1266039Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.14616ms
grafana | logger=migrator t=2025-06-16T18:32:46.132943872Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
grafana | logger=migrator t=2025-06-16T18:32:46.134641615Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.696873ms
grafana | logger=migrator t=2025-06-16T18:32:46.144700737Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
grafana | logger=migrator t=2025-06-16T18:32:46.146312439Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.610032ms
grafana | logger=migrator t=2025-06-16T18:32:46.150412063Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-16T18:32:46.150531434Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=120.161µs
grafana | logger=migrator t=2025-06-16T18:32:46.154287884Z level=info msg="Executing migration" id="add column for to alert_rule"
grafana | logger=migrator t=2025-06-16T18:32:46.160539244Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.25049ms
grafana | logger=migrator t=2025-06-16T18:32:46.164757618Z level=info msg="Executing migration" id="add column annotations to alert_rule"
grafana | logger=migrator t=2025-06-16T18:32:46.171416713Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.650694ms
grafana | logger=migrator t=2025-06-16T18:32:46.177866605Z level=info msg="Executing migration" id="add column labels to alert_rule"
grafana | logger=migrator t=2025-06-16T18:32:46.184533758Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.666283ms
grafana | logger=migrator t=2025-06-16T18:32:46.187840896Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
grafana | logger=migrator t=2025-06-16T18:32:46.188621452Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=780.186µs
grafana | logger=migrator t=2025-06-16T18:32:46.191904219Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
grafana | logger=migrator t=2025-06-16T18:32:46.192867476Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=962.877µs
grafana | logger=migrator t=2025-06-16T18:32:46.19958475Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
grafana | logger=migrator t=2025-06-16T18:32:46.209221638Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.636348ms
grafana | logger=migrator t=2025-06-16T18:32:46.212333463Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
grafana | logger=migrator t=2025-06-16T18:32:46.216768999Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.434856ms
grafana | logger=migrator t=2025-06-16T18:32:46.224905475Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
grafana | logger=migrator t=2025-06-16T18:32:46.227071613Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.164988ms
grafana | logger=migrator t=2025-06-16T18:32:46.230962385Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
grafana | logger=migrator t=2025-06-16T18:32:46.239677795Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=8.71575ms
grafana | logger=migrator t=2025-06-16T18:32:46.244574924Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
grafana | logger=migrator t=2025-06-16T18:32:46.250901506Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.325942ms
grafana | logger=migrator t=2025-06-16T18:32:46.285507726Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
grafana | logger=migrator t=2025-06-16T18:32:46.285687608Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=180.412µs
grafana | logger=migrator t=2025-06-16T18:32:46.291825617Z level=info msg="Executing migration" id="create alert_rule_version table"
grafana | logger=migrator t=2025-06-16T18:32:46.293641591Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.820914ms
grafana | logger=migrator t=2025-06-16T18:32:46.298447761Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
grafana | logger=migrator t=2025-06-16T18:32:46.300200745Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.752234ms
grafana | logger=migrator t=2025-06-16T18:32:46.30701915Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
grafana | logger=migrator t=2025-06-16T18:32:46.308268531Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.239551ms
grafana | logger=migrator t=2025-06-16T18:32:46.314560521Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
grafana | logger=migrator t=2025-06-16T18:32:46.314667362Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=107.871µs
grafana | logger=migrator t=2025-06-16T18:32:46.319572572Z level=info msg="Executing migration" id="add column for to alert_rule_version"
grafana | logger=migrator t=2025-06-16T18:32:46.326650889Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.078517ms
grafana | logger=migrator t=2025-06-16T18:32:46.3341789Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
grafana | logger=migrator t=2025-06-16T18:32:46.340772743Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.593193ms
grafana | logger=migrator t=2025-06-16T18:32:46.348021922Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
grafana | logger=migrator t=2025-06-16T18:32:46.355302671Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.274209ms
grafana | logger=migrator t=2025-06-16T18:32:46.36003297Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
grafana | logger=migrator t=2025-06-16T18:32:46.36641426Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.3806ms
grafana | logger=migrator t=2025-06-16T18:32:46.369655287Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
grafana | logger=migrator t=2025-06-16T18:32:46.376315031Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.659114ms
grafana | logger=migrator t=2025-06-16T18:32:46.381946936Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
grafana | logger=migrator t=2025-06-16T18:32:46.382039707Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=92.541µs
grafana | logger=migrator t=2025-06-16T18:32:46.388931653Z level=info msg="Executing migration" id=create_alert_configuration_table
grafana | logger=migrator t=2025-06-16T18:32:46.390572557Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.640004ms
grafana | logger=migrator t=2025-06-16T18:32:46.395636877Z level=info msg="Executing migration" id="Add column default in alert_configuration"
grafana | logger=migrator t=2025-06-16T18:32:46.404683971Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.047274ms
grafana | logger=migrator t=2025-06-16T18:32:46.409172497Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
grafana | logger=migrator t=2025-06-16T18:32:46.409328998Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=153.541µs
grafana | logger=migrator t=2025-06-16T18:32:46.416144553Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
grafana | logger=migrator t=2025-06-16T18:32:46.426126604Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=9.986951ms
grafana | logger=migrator t=2025-06-16T18:32:46.429876515Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
grafana | logger=migrator t=2025-06-16T18:32:46.430684221Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=807.446µs
grafana | logger=migrator t=2025-06-16T18:32:46.43538457Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
grafana | logger=migrator t=2025-06-16T18:32:46.441810531Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.425561ms
grafana | logger=migrator t=2025-06-16T18:32:46.446121326Z level=info msg="Executing migration" id=create_ngalert_configuration_table
grafana | logger=migrator t=2025-06-16T18:32:46.447072474Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=950.098µs
grafana | logger=migrator t=2025-06-16T18:32:46.454266002Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
grafana | logger=migrator t=2025-06-16T18:32:46.456664571Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=2.401159ms
grafana | logger=migrator t=2025-06-16T18:32:46.461223818Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
grafana | logger=migrator t=2025-06-16T18:32:46.468077334Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.852816ms
grafana | logger=migrator t=2025-06-16T18:32:46.478061465Z level=info msg="Executing migration" id="create provenance_type table"
grafana | logger=migrator t=2025-06-16T18:32:46.479652117Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.589742ms
grafana | logger=migrator t=2025-06-16T18:32:46.486318142Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
grafana | logger=migrator t=2025-06-16T18:32:46.487482191Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.163789ms
grafana | logger=migrator t=2025-06-16T18:32:46.493137597Z level=info msg="Executing migration" id="create alert_image table"
grafana | logger=migrator t=2025-06-16T18:32:46.494658649Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.520312ms
grafana | logger=migrator t=2025-06-16T18:32:46.500302615Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
grafana | logger=migrator t=2025-06-16T18:32:46.501441854Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.138849ms
grafana | logger=migrator t=2025-06-16T18:32:46.506453604Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
grafana | logger=migrator t=2025-06-16T18:32:46.506669786Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=217.622µs
grafana | logger=migrator t=2025-06-16T18:32:46.51208435Z level=info msg="Executing migration" id=create_alert_configuration_history_table
grafana | logger=migrator t=2025-06-16T18:32:46.513646342Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.564732ms
grafana | logger=migrator t=2025-06-16T18:32:46.518063989Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
grafana | logger=migrator t=2025-06-16T18:32:46.519223847Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.159268ms
grafana | logger=migrator t=2025-06-16T18:32:46.522575825Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2025-06-16T18:32:46.523082779Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2025-06-16T18:32:46.527812888Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
grafana | logger=migrator t=2025-06-16T18:32:46.528653744Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=846.266µs
grafana | logger=migrator t=2025-06-16T18:32:46.537837968Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
grafana | logger=migrator t=2025-06-16T18:32:46.539704713Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.865435ms
grafana | logger=migrator t=2025-06-16T18:32:46.543856467Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
grafana | logger=migrator t=2025-06-16T18:32:46.551957953Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.102066ms
grafana | logger=migrator t=2025-06-16T18:32:46.555907555Z level=info msg="Executing migration" id="create library_element table v1"
grafana | logger=migrator t=2025-06-16T18:32:46.556725231Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=817.366µs
grafana | logger=migrator t=2025-06-16T18:32:46.561919703Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
grafana | logger=migrator t=2025-06-16T18:32:46.56408448Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=2.162187ms
grafana | logger=migrator t=2025-06-16T18:32:46.568000762Z level=info msg="Executing migration" id="create library_element_connection table v1"
grafana | logger=migrator t=2025-06-16T18:32:46.569847998Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.846266ms
grafana | logger=migrator t=2025-06-16T18:32:46.573492767Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
grafana | logger=migrator t=2025-06-16T18:32:46.574665666Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.172319ms
grafana | logger=migrator t=2025-06-16T18:32:46.580531304Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
grafana | logger=migrator t=2025-06-16T18:32:46.581664553Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.132509ms
grafana | logger=migrator t=2025-06-16T18:32:46.591283321Z level=info msg="Executing migration" id="increase max description length to 2048"
grafana | logger=migrator t=2025-06-16T18:32:46.591479262Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=195.121µs
grafana | logger=migrator t=2025-06-16T18:32:46.596933747Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
grafana | logger=migrator t=2025-06-16T18:32:46.597084568Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=151.301µs
grafana | logger=migrator t=2025-06-16T18:32:46.647111823Z level=info msg="Executing migration" id="add library_element folder uid"
grafana | logger=migrator t=2025-06-16T18:32:46.655594301Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=8.479808ms
grafana | logger=migrator t=2025-06-16T18:32:46.659894266Z level=info msg="Executing migration" id="populate library_element folder_uid"
grafana | logger=migrator t=2025-06-16T18:32:46.660254719Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=357.553µs
grafana | logger=migrator t=2025-06-16T18:32:46.664994758Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
grafana | logger=migrator t=2025-06-16T18:32:46.665917495Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=922.017µs
grafana | logger=migrator t=2025-06-16T18:32:46.674848908Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
grafana | logger=migrator t=2025-06-16T18:32:46.675476992Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=627.004µs
grafana | logger=migrator t=2025-06-16T18:32:46.682467579Z level=info msg="Executing migration" id="create data_keys table"
grafana | logger=migrator t=2025-06-16T18:32:46.684174763Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.707484ms
grafana | logger=migrator t=2025-06-16T18:32:46.691180369Z level=info msg="Executing migration" id="create secrets table"
grafana | logger=migrator t=2025-06-16T18:32:46.692650131Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.469472ms
grafana | logger=migrator t=2025-06-16T18:32:46.700294033Z level=info msg="Executing migration" id="rename data_keys name column to id"
grafana | logger=migrator t=2025-06-16T18:32:46.740293086Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=39.996023ms
grafana | logger=migrator t=2025-06-16T18:32:46.749880584Z level=info msg="Executing migration" id="add name column into data_keys"
grafana | logger=migrator t=2025-06-16T18:32:46.758104441Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.223357ms
grafana | logger=migrator t=2025-06-16T18:32:46.762907161Z level=info msg="Executing migration" id="copy data_keys id column values into name"
grafana | logger=migrator t=2025-06-16T18:32:46.763165742Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=257.711µs
grafana | logger=migrator t=2025-06-16T18:32:46.766789691Z level=info msg="Executing migration" id="rename data_keys name column to label"
grafana | logger=migrator t=2025-06-16T18:32:46.800838057Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.045936ms
grafana | logger=migrator t=2025-06-16T18:32:46.813305929Z level=info msg="Executing migration" id="rename data_keys id column back to name"
grafana | logger=migrator t=2025-06-16T18:32:46.84691049Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=33.642403ms
grafana | logger=migrator t=2025-06-16T18:32:46.854961185Z level=info msg="Executing migration" id="create kv_store table v1"
grafana | logger=migrator t=2025-06-16T18:32:46.856619728Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.661903ms
grafana | logger=migrator t=2025-06-16T18:32:46.865960564Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
grafana | logger=migrator t=2025-06-16T18:32:46.867485486Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.523802ms
grafana | logger=migrator t=2025-06-16T18:32:46.888527697Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
grafana | logger=migrator t=2025-06-16T18:32:46.888871139Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=343.232µs
grafana | logger=migrator t=2025-06-16T18:32:46.89750476Z level=info msg="Executing migration" id="create permission table"
grafana | logger=migrator t=2025-06-16T18:32:46.899012132Z level=info msg="Migration successfully executed" id="create permission table" duration=1.507522ms
grafana | logger=migrator t=2025-06-16T18:32:46.907083946Z level=info msg="Executing migration" id="add unique index permission.role_id"
grafana | logger=migrator t=2025-06-16T18:32:46.90869698Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.613884ms
grafana | logger=migrator t=2025-06-16T18:32:46.91247939Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
grafana | logger=migrator t=2025-06-16T18:32:46.914122164Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.642724ms
grafana | logger=migrator t=2025-06-16T18:32:46.922181419Z level=info msg="Executing migration" id="create role table"
grafana | logger=migrator t=2025-06-16T18:32:46.92357315Z level=info msg="Migration successfully executed" id="create role table" duration=1.391421ms
grafana | logger=migrator t=2025-06-16T18:32:46.929051275Z level=info msg="Executing migration" id="add column display_name"
grafana | logger=migrator t=2025-06-16T18:32:46.93703226Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.979055ms
grafana | logger=migrator t=2025-06-16T18:32:46.942840916Z level=info msg="Executing migration" id="add column group_name"
grafana | logger=migrator t=2025-06-16T18:32:46.949203808Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.363962ms
grafana | logger=migrator t=2025-06-16T18:32:46.986153107Z level=info msg="Executing migration" id="add index role.org_id"
grafana | logger=migrator t=2025-06-16T18:32:46.988595107Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=2.44549ms
grafana | logger=migrator t=2025-06-16T18:32:46.995893596Z level=info msg="Executing migration" id="add unique index role_org_id_name"
grafana | logger=migrator t=2025-06-16T18:32:46.996936944Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.043348ms
grafana | logger=migrator t=2025-06-16T18:32:46.999907558Z level=info msg="Executing migration" id="add index role_org_id_uid"
grafana | logger=migrator t=2025-06-16T18:32:47.001894074Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.980156ms
grafana | logger=migrator t=2025-06-16T18:32:47.015368004Z level=info msg="Executing migration" id="create team role table"
grafana | logger=migrator t=2025-06-16T18:32:47.017032297Z level=info msg="Migration successfully executed" id="create team role table" duration=1.646783ms
grafana | logger=migrator t=2025-06-16T18:32:47.023505649Z level=info msg="Executing migration" id="add index team_role.org_id"
grafana | logger=migrator t=2025-06-16T18:32:47.024690418Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.184799ms
grafana | logger=migrator t=2025-06-16T18:32:47.028022716Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
grafana | logger=migrator t=2025-06-16T18:32:47.029108275Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.084759ms
grafana | logger=migrator t=2025-06-16T18:32:47.036819507Z level=info msg="Executing migration" id="add index team_role.team_id"
grafana | logger=migrator t=2025-06-16T18:32:47.038023637Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.21059ms
grafana | logger=migrator t=2025-06-16T18:32:47.047517154Z level=info msg="Executing migration" id="create user role table"
grafana | logger=migrator t=2025-06-16T18:32:47.049160187Z level=info msg="Migration successfully executed" id="create user role table" duration=1.642893ms
grafana | logger=migrator t=2025-06-16T18:32:47.058084728Z level=info msg="Executing migration" id="add index user_role.org_id"
grafana | logger=migrator t=2025-06-16T18:32:47.059148967Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.063809ms
grafana | logger=migrator t=2025-06-16T18:32:47.074062418Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
grafana | logger=migrator t=2025-06-16T18:32:47.075821692Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.758514ms
grafana | logger=migrator t=2025-06-16T18:32:47.082027212Z level=info msg="Executing migration" id="add index user_role.user_id"
grafana | logger=migrator t=2025-06-16T18:32:47.083116571Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.086039ms
grafana | logger=migrator t=2025-06-16T18:32:47.09044229Z level=info msg="Executing migration" id="create builtin role table"
grafana | logger=migrator t=2025-06-16T18:32:47.091816181Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.372981ms
grafana | logger=migrator t=2025-06-16T18:32:47.103336615Z level=info msg="Executing migration" id="add index builtin_role.role_id"
grafana | logger=migrator t=2025-06-16T18:32:47.10526845Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.930865ms
grafana | logger=migrator t=2025-06-16T18:32:47.109944388Z level=info msg="Executing migration" id="add index builtin_role.name"
grafana | logger=migrator t=2025-06-16T18:32:47.111261808Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.31792ms
grafana | logger=migrator t=2025-06-16T18:32:47.116144119Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
grafana | logger=migrator t=2025-06-16T18:32:47.127734392Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=11.590803ms
grafana | logger=migrator t=2025-06-16T18:32:47.137850503Z level=info msg="Executing migration" id="add index builtin_role.org_id"
grafana | logger=migrator t=2025-06-16T18:32:47.13978188Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.934687ms
grafana | logger=migrator t=2025-06-16T18:32:47.14363052Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
grafana | logger=migrator t=2025-06-16T18:32:47.14481313Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.18175ms
grafana | logger=migrator t=2025-06-16T18:32:47.151153412Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
grafana | logger=migrator t=2025-06-16T18:32:47.153318048Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=2.163986ms
grafana | logger=migrator t=2025-06-16T18:32:47.159365598Z level=info msg="Executing migration" id="add unique index role.uid"
grafana | logger=migrator t=2025-06-16T18:32:47.160442926Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.077238ms
grafana | logger=migrator t=2025-06-16T18:32:47.165248505Z level=info msg="Executing migration" id="create seed assignment table"
grafana | logger=migrator t=2025-06-16T18:32:47.166094192Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=844.637µs
grafana | logger=migrator t=2025-06-16T18:32:47.171438425Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
grafana | logger=migrator t=2025-06-16T18:32:47.172483194Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.044319ms
grafana | logger=migrator t=2025-06-16T18:32:47.184029757Z level=info msg="Executing migration" id="add column hidden to role table"
grafana | logger=migrator t=2025-06-16T18:32:47.195585281Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.554154ms
grafana | logger=migrator t=2025-06-16T18:32:47.198529225Z level=info msg="Executing migration" id="permission kind migration"
grafana | logger=migrator t=2025-06-16T18:32:47.205235649Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.705344ms
grafana | logger=migrator t=2025-06-16T18:32:47.211380969Z
level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-16T18:32:47.219449664Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.064286ms grafana | logger=migrator t=2025-06-16T18:32:47.225702604Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-16T18:32:47.240485714Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=14.78267ms grafana | logger=migrator t=2025-06-16T18:32:47.245473214Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-16T18:32:47.246397122Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=923.868µs grafana | logger=migrator t=2025-06-16T18:32:47.250569446Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-16T18:32:47.251903027Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.332811ms grafana | logger=migrator t=2025-06-16T18:32:47.263113267Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-16T18:32:47.264831471Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.716944ms grafana | logger=migrator t=2025-06-16T18:32:47.271489405Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-16T18:32:47.280384707Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.895022ms grafana | logger=migrator t=2025-06-16T18:32:47.284314909Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-16T18:32:47.285500628Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.181759ms grafana | logger=migrator t=2025-06-16T18:32:47.293884586Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-16T18:32:47.295949452Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=2.068206ms grafana | logger=migrator t=2025-06-16T18:32:47.301799209Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-16T18:32:47.303375663Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.579634ms grafana | logger=migrator t=2025-06-16T18:32:47.366288791Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-16T18:32:47.368505389Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.218328ms grafana | logger=migrator t=2025-06-16T18:32:47.378023896Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-16T18:32:47.378053157Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=30.63µs grafana | logger=migrator 
t=2025-06-16T18:32:47.383971354Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-16T18:32:47.385072203Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.129319ms grafana | logger=migrator t=2025-06-16T18:32:47.389740591Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-16T18:32:47.389826492Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=86.601µs grafana | logger=migrator t=2025-06-16T18:32:47.398961566Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-16T18:32:47.399685811Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=723.665µs grafana | logger=migrator t=2025-06-16T18:32:47.404309749Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-16T18:32:47.405253766Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=944.977µs grafana | logger=migrator t=2025-06-16T18:32:47.410549389Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-16T18:32:47.411332246Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=782.727µs grafana | logger=migrator t=2025-06-16T18:32:47.415672291Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-16T18:32:47.415874763Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=202.392µs grafana | logger=migrator t=2025-06-16T18:32:47.424981835Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-16T18:32:47.425722552Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=740.547µs grafana | logger=migrator t=2025-06-16T18:32:47.429553873Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-16T18:32:47.430918594Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.364221ms grafana | logger=migrator t=2025-06-16T18:32:47.437219815Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-16T18:32:47.438618086Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.397791ms grafana | logger=migrator t=2025-06-16T18:32:47.451984064Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-16T18:32:47.461294849Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.307845ms grafana | logger=migrator t=2025-06-16T18:32:47.468182615Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-16T18:32:47.468196995Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=14.87µs grafana | logger=migrator t=2025-06-16T18:32:47.472656031Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-16T18:32:47.474158853Z level=info msg="Migration successfully 
executed" id="create correlation table v1" duration=1.494092ms grafana | logger=migrator t=2025-06-16T18:32:47.480264163Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-16T18:32:47.481571564Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.306641ms grafana | logger=migrator t=2025-06-16T18:32:47.488905363Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-16T18:32:47.490020572Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.114749ms grafana | logger=migrator t=2025-06-16T18:32:47.496414073Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-16T18:32:47.507487243Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.07295ms grafana | logger=migrator t=2025-06-16T18:32:47.511258604Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-16T18:32:47.512516604Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.25856ms grafana | logger=migrator t=2025-06-16T18:32:47.51944731Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-16T18:32:47.521182474Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.808134ms grafana | logger=migrator t=2025-06-16T18:32:47.527512916Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T18:32:47.548402895Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=20.896659ms grafana | logger=migrator t=2025-06-16T18:32:47.551489569Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-16T18:32:47.552436406Z level=info msg="Migration successfully executed" id="create correlation v2" duration=946.647µs grafana | logger=migrator t=2025-06-16T18:32:47.559679176Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-16T18:32:47.561705352Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.025345ms grafana | logger=migrator t=2025-06-16T18:32:47.565233841Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-16T18:32:47.566986655Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.752414ms grafana | logger=migrator t=2025-06-16T18:32:47.570756594Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-16T18:32:47.571828093Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.064669ms grafana | logger=migrator t=2025-06-16T18:32:47.578005754Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-16T18:32:47.578578308Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=572.094µs grafana | logger=migrator t=2025-06-16T18:32:47.582650411Z level=info msg="Executing migration" id="drop 
correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-16T18:32:47.583923622Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.26506ms grafana | logger=migrator t=2025-06-16T18:32:47.591113709Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-16T18:32:47.599386997Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.272868ms grafana | logger=migrator t=2025-06-16T18:32:47.608340128Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-16T18:32:47.621382915Z level=info msg="Migration successfully executed" id="add type column" duration=13.043607ms grafana | logger=migrator t=2025-06-16T18:32:47.625659149Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-16T18:32:47.626387235Z level=info msg="Migration successfully executed" id="create entity_events table" duration=732.756µs grafana | logger=migrator t=2025-06-16T18:32:47.631026103Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-16T18:32:47.63201599Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=989.767µs grafana | logger=migrator t=2025-06-16T18:32:47.636576428Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-16T18:32:47.637014021Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-16T18:32:47.640388538Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-16T18:32:47.640812122Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-16T18:32:47.64430176Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2025-06-16T18:32:47.645027086Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=725.076µs grafana | logger=migrator t=2025-06-16T18:32:47.652219424Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-16T18:32:47.653424104Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.20436ms grafana | logger=migrator t=2025-06-16T18:32:47.656835111Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-16T18:32:47.658837547Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.002286ms grafana | logger=migrator t=2025-06-16T18:32:47.662480637Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-16T18:32:47.663658206Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.177319ms grafana | logger=migrator t=2025-06-16T18:32:47.668866819Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator 
t=2025-06-16T18:32:47.670431071Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.570682ms grafana | logger=migrator t=2025-06-16T18:32:47.673884499Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-16T18:32:47.675610902Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.726253ms grafana | logger=migrator t=2025-06-16T18:32:47.679861757Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-16T18:32:47.680854285Z level=info msg="Migration successfully executed" id="Drop public config table" duration=992.198µs grafana | logger=migrator t=2025-06-16T18:32:47.684654786Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-16T18:32:47.685997927Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.342361ms grafana | logger=migrator t=2025-06-16T18:32:47.690255421Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-16T18:32:47.691432211Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.17643ms grafana | logger=migrator t=2025-06-16T18:32:47.727890846Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-16T18:32:47.730772409Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.880073ms grafana | logger=migrator t=2025-06-16T18:32:47.735700108Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-16T18:32:47.736866828Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.16632ms grafana | logger=migrator t=2025-06-16T18:32:47.743860125Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-16T18:32:47.765975984Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=22.115749ms grafana | logger=migrator t=2025-06-16T18:32:47.769184579Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-16T18:32:47.775992675Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.807656ms grafana | logger=migrator t=2025-06-16T18:32:47.780220429Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-16T18:32:47.788576266Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.355957ms grafana | logger=migrator t=2025-06-16T18:32:47.79397013Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-16T18:32:47.794817117Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=847.647µs grafana | logger=migrator t=2025-06-16T18:32:47.798535497Z level=info msg="Executing migration" id="add share column" grafana | 
logger=migrator t=2025-06-16T18:32:47.808809121Z level=info msg="Migration successfully executed" id="add share column" duration=10.273384ms grafana | logger=migrator t=2025-06-16T18:32:47.811833795Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-16T18:32:47.811963556Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=131.321µs grafana | logger=migrator t=2025-06-16T18:32:47.814213114Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-16T18:32:47.81494736Z level=info msg="Migration successfully executed" id="create file table" duration=734.126µs grafana | logger=migrator t=2025-06-16T18:32:47.818998803Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-16T18:32:47.820333803Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.33441ms grafana | logger=migrator t=2025-06-16T18:32:47.825004521Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-16T18:32:47.82731736Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.304079ms grafana | logger=migrator t=2025-06-16T18:32:47.830729087Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-16T18:32:47.831720866Z level=info msg="Migration successfully executed" id="create file_meta table" duration=992.389µs grafana | logger=migrator t=2025-06-16T18:32:47.836078541Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-16T18:32:47.83723898Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.160139ms grafana | logger=migrator t=2025-06-16T18:32:47.840591687Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-16T18:32:47.840611077Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=19.94µs grafana | logger=migrator t=2025-06-16T18:32:47.843732752Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-16T18:32:47.843752832Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=20.79µs grafana | logger=migrator t=2025-06-16T18:32:47.849297217Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-16T18:32:47.850493087Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.19583ms grafana | logger=migrator t=2025-06-16T18:32:47.856934609Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-16T18:32:47.857132441Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=197.632µs grafana | logger=migrator t=2025-06-16T18:32:47.860434047Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-16T18:32:47.861717308Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.282641ms grafana | logger=migrator 
t=2025-06-16T18:32:47.864976715Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-16T18:32:47.874560751Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.585096ms grafana | logger=migrator t=2025-06-16T18:32:47.880186637Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-16T18:32:47.880572301Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=386.044µs grafana | logger=migrator t=2025-06-16T18:32:47.884152869Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-16T18:32:47.886203246Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.049727ms grafana | logger=migrator t=2025-06-16T18:32:47.889614854Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-16T18:32:47.890038908Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=424.144µs grafana | logger=migrator t=2025-06-16T18:32:47.89409595Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-16T18:32:47.894297502Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=201.482µs grafana | logger=migrator t=2025-06-16T18:32:47.901321589Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-16T18:32:47.902048624Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=726.695µs grafana | logger=migrator t=2025-06-16T18:32:47.906921994Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-16T18:32:47.918631288Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.709464ms grafana | logger=migrator t=2025-06-16T18:32:47.922020935Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-16T18:32:47.930712246Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.693731ms grafana | logger=migrator t=2025-06-16T18:32:47.939411277Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-16T18:32:47.940550826Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.139369ms grafana | logger=migrator t=2025-06-16T18:32:47.943678211Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-16T18:32:48.019803026Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=76.125055ms grafana | logger=migrator t=2025-06-16T18:32:48.024774477Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-16T18:32:48.025941566Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.166639ms grafana | logger=migrator t=2025-06-16T18:32:48.037656321Z level=info msg="Executing 
migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-16T18:32:48.039538076Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.881365ms grafana | logger=migrator t=2025-06-16T18:32:48.096470936Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-16T18:32:48.127373896Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=30.90363ms grafana | logger=migrator t=2025-06-16T18:32:48.132579698Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-16T18:32:48.140562492Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.982164ms grafana | logger=migrator t=2025-06-16T18:32:48.146339809Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-16T18:32:48.146700862Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=360.683µs grafana | logger=migrator t=2025-06-16T18:32:48.153707949Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-16T18:32:48.154520415Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=775.565µs grafana | logger=migrator t=2025-06-16T18:32:48.163122575Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-16T18:32:48.163711159Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=588.514µs grafana | logger=migrator t=2025-06-16T18:32:48.167807102Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-16T18:32:48.168148725Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=341.303µs grafana | logger=migrator t=2025-06-16T18:32:48.171420662Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-16T18:32:48.171874035Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=452.854µs grafana | logger=migrator t=2025-06-16T18:32:48.182838214Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-16T18:32:48.184190015Z level=info msg="Migration successfully executed" id="create folder table" duration=1.351151ms grafana | logger=migrator t=2025-06-16T18:32:48.189158475Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-16T18:32:48.191295543Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.137358ms grafana | logger=migrator t=2025-06-16T18:32:48.196374153Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-16T18:32:48.197492362Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.111129ms grafana | logger=migrator t=2025-06-16T18:32:48.205433347Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator 
t=2025-06-16T18:32:48.205584138Z level=info msg="Migration successfully executed" id="Update folder title length" duration=93.8µs grafana | logger=migrator t=2025-06-16T18:32:48.214783582Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-16T18:32:48.217878178Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=3.083245ms grafana | logger=migrator t=2025-06-16T18:32:48.22321686Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-16T18:32:48.22452501Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.30816ms grafana | logger=migrator t=2025-06-16T18:32:48.22813044Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-16T18:32:48.22938944Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.25864ms grafana | logger=migrator t=2025-06-16T18:32:48.23555735Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-16T18:32:48.236455777Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=855.267µs grafana | logger=migrator t=2025-06-16T18:32:48.24915774Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-16T18:32:48.249618443Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=460.263µs grafana | logger=migrator t=2025-06-16T18:32:48.254165611Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-16T18:32:48.255813044Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.633833ms grafana | logger=migrator t=2025-06-16T18:32:48.26656371Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-16T18:32:48.267912241Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.347841ms grafana | logger=migrator t=2025-06-16T18:32:48.276680313Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-16T18:32:48.279165042Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=2.484799ms grafana | logger=migrator t=2025-06-16T18:32:48.285882596Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-16T18:32:48.287523749Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.640103ms grafana | logger=migrator t=2025-06-16T18:32:48.299073374Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-16T18:32:48.300716237Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.643743ms grafana | logger=migrator t=2025-06-16T18:32:48.310763158Z level=info msg="Executing migration" id="Remove unique index 
UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-16T18:32:48.31227192Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.508642ms grafana | logger=migrator t=2025-06-16T18:32:48.320650327Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-16T18:32:48.321784137Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.13322ms grafana | logger=migrator t=2025-06-16T18:32:48.329535559Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-16T18:32:48.331682927Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.147308ms grafana | logger=migrator t=2025-06-16T18:32:48.34445004Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-16T18:32:48.346749899Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.303799ms grafana | logger=migrator t=2025-06-16T18:32:48.359536372Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-16T18:32:48.361242405Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.705843ms grafana | logger=migrator t=2025-06-16T18:32:48.367747408Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-16T18:32:48.36914887Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.401282ms grafana | logger=migrator t=2025-06-16T18:32:48.376012065Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-16T18:32:48.377998751Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.986276ms grafana | logger=migrator t=2025-06-16T18:32:48.383229443Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-16T18:32:48.383896208Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=667.485µs grafana | logger=migrator t=2025-06-16T18:32:48.391326459Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-16T18:32:48.403204315Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.878435ms grafana | logger=migrator t=2025-06-16T18:32:48.407028966Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-16T18:32:48.407658Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=629.584µs grafana | logger=migrator t=2025-06-16T18:32:48.413844361Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-16T18:32:48.414046323Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=202.732µs grafana | logger=migrator t=2025-06-16T18:32:48.420047301Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator 
t=2025-06-16T18:32:48.422056387Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.008216ms grafana | logger=migrator t=2025-06-16T18:32:48.426131399Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-16T18:32:48.42620286Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=72.451µs grafana | logger=migrator t=2025-06-16T18:32:48.534389565Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-16T18:32:48.536519632Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.132547ms grafana | logger=migrator t=2025-06-16T18:32:48.54367231Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-16T18:32:48.545595996Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.923286ms grafana | logger=migrator t=2025-06-16T18:32:48.549395046Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-16T18:32:48.551241781Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.846325ms grafana | logger=migrator t=2025-06-16T18:32:48.555629317Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-16T18:32:48.556683195Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.053779ms grafana | logger=migrator t=2025-06-16T18:32:48.562937636Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-16T18:32:48.564390457Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.453582ms grafana | logger=migrator t=2025-06-16T18:32:48.567950776Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-16T18:32:48.568321479Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=373.383µs grafana | logger=migrator t=2025-06-16T18:32:48.571826217Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-16T18:32:48.572517933Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=691.076µs grafana | logger=migrator t=2025-06-16T18:32:48.5783554Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-16T18:32:48.579806262Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.456172ms grafana | logger=migrator t=2025-06-16T18:32:48.587280603Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-16T18:32:48.588856215Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.539292ms grafana | logger=migrator t=2025-06-16T18:32:48.595025035Z level=info msg="Executing migration" 
id="add stack_id column" grafana | logger=migrator t=2025-06-16T18:32:48.604677223Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.651618ms grafana | logger=migrator t=2025-06-16T18:32:48.608815566Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-16T18:32:48.61794602Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.137844ms grafana | logger=migrator t=2025-06-16T18:32:48.622588668Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-16T18:32:48.63158327Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=8.993682ms grafana | logger=migrator t=2025-06-16T18:32:48.63530563Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-16T18:32:48.64265032Z level=info msg="Migration successfully executed" id="add migration uid column" duration=7.34383ms grafana | logger=migrator t=2025-06-16T18:32:48.648941811Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-16T18:32:48.649118672Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=176.341µs grafana | logger=migrator t=2025-06-16T18:32:48.652760901Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-16T18:32:48.653938321Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.17721ms grafana | logger=migrator t=2025-06-16T18:32:48.657270078Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-16T18:32:48.666634064Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.363266ms grafana | logger=migrator t=2025-06-16T18:32:48.671991197Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-16T18:32:48.672124608Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=133.501µs grafana | logger=migrator t=2025-06-16T18:32:48.678697021Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-16T18:32:48.680896939Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=2.204237ms grafana | logger=migrator t=2025-06-16T18:32:48.684474598Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T18:32:48.713592983Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=29.122245ms grafana | logger=migrator t=2025-06-16T18:32:48.717941978Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-16T18:32:48.718893456Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=951.248µs grafana | logger=migrator t=2025-06-16T18:32:48.722643436Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-16T18:32:48.723805066Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.16183ms 
grafana | logger=migrator t=2025-06-16T18:32:48.741163086Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-16T18:32:48.74167809Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=514.975µs grafana | logger=migrator t=2025-06-16T18:32:48.745275859Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-16T18:32:48.747045904Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.767954ms grafana | logger=migrator t=2025-06-16T18:32:48.754720175Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T18:32:48.781007788Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=26.287323ms grafana | logger=migrator t=2025-06-16T18:32:48.78747834Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-16T18:32:48.788200316Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=721.626µs grafana | logger=migrator t=2025-06-16T18:32:48.791339731Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-16T18:32:48.793387988Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=2.047187ms grafana | logger=migrator t=2025-06-16T18:32:48.798198437Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-16T18:32:48.798525899Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=327.392µs grafana | logger=migrator t=2025-06-16T18:32:48.803027086Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-16T18:32:48.803853692Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=826.436µs grafana | logger=migrator t=2025-06-16T18:32:48.806992677Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-16T18:32:48.816392964Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=9.406647ms grafana | logger=migrator t=2025-06-16T18:32:48.822585434Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-16T18:32:48.832192481Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=9.606447ms grafana | logger=migrator t=2025-06-16T18:32:48.835302567Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-16T18:32:48.844807423Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.498636ms grafana | logger=migrator t=2025-06-16T18:32:48.878857009Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-16T18:32:48.89145397Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=12.598191ms grafana | logger=migrator t=2025-06-16T18:32:48.896862334Z level=info 
msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-16T18:32:48.905346343Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=8.482949ms grafana | logger=migrator t=2025-06-16T18:32:48.919101504Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-16T18:32:48.930169163Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=11.067129ms grafana | logger=migrator t=2025-06-16T18:32:48.937368792Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-16T18:32:48.938586561Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.219249ms grafana | logger=migrator t=2025-06-16T18:32:48.941894648Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-16T18:32:48.977433225Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=35.532847ms grafana | logger=migrator t=2025-06-16T18:32:48.987005452Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-16T18:32:48.997501148Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=10.494476ms grafana | logger=migrator t=2025-06-16T18:32:49.001645581Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-16T18:32:49.012007655Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=10.360784ms grafana | logger=migrator t=2025-06-16T18:32:49.016996644Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-16T18:32:49.02873161Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=11.737856ms grafana | logger=migrator t=2025-06-16T18:32:49.034408346Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-16T18:32:49.043294327Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=8.884821ms grafana | logger=migrator t=2025-06-16T18:32:49.049958171Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-16T18:32:49.049973781Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=15.88µs grafana | logger=migrator t=2025-06-16T18:32:49.053171617Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-16T18:32:49.053184227Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=13.27µs grafana | logger=migrator t=2025-06-16T18:32:49.05729446Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-16T18:32:49.069851801Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=12.557531ms grafana | logger=migrator t=2025-06-16T18:32:49.07588198Z level=info msg="Executing migration" id="add notification_settings column to 
alert_rule_version table" grafana | logger=migrator t=2025-06-16T18:32:49.085548918Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.666298ms grafana | logger=migrator t=2025-06-16T18:32:49.089004676Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-16T18:32:49.08945654Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=451.204µs grafana | logger=migrator t=2025-06-16T18:32:49.093989046Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-16T18:32:49.094238289Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=248.583µs grafana | logger=migrator t=2025-06-16T18:32:49.099006927Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-16T18:32:49.111259226Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.252689ms grafana | logger=migrator t=2025-06-16T18:32:49.11666024Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T18:32:49.125212729Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=8.543939ms grafana | logger=migrator t=2025-06-16T18:32:49.128260773Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-16T18:32:49.137812681Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=9.551968ms grafana | logger=migrator t=2025-06-16T18:32:49.142342367Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-16T18:32:49.15140872Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=9.065213ms grafana | logger=migrator t=2025-06-16T18:32:49.157191287Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-16T18:32:49.157727471Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=535.884µs grafana | logger=migrator t=2025-06-16T18:32:49.160948437Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-16T18:32:49.172334959Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=11.385652ms grafana | logger=migrator t=2025-06-16T18:32:49.183312237Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T18:32:49.195309555Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=11.997468ms grafana | logger=migrator t=2025-06-16T18:32:49.234219699Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-16T18:32:49.234850384Z level=info msg="Migration successfully executed" id="delete 
orphaned service account permissions" duration=630.725µs grafana | logger=migrator t=2025-06-16T18:32:49.238940627Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-16T18:32:49.239707943Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=777.586µs grafana | logger=migrator t=2025-06-16T18:32:49.243324172Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-16T18:32:49.245227797Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.902955ms grafana | logger=migrator t=2025-06-16T18:32:49.248783486Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-16T18:32:49.248809647Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=27.231µs grafana | logger=migrator t=2025-06-16T18:32:49.255750293Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-16T18:32:49.255776713Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=27.35µs grafana | logger=migrator t=2025-06-16T18:32:49.259274511Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-16T18:32:49.260012547Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=737.106µs grafana | logger=migrator t=2025-06-16T18:32:49.265506662Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T18:32:49.276237368Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=10.729736ms grafana | logger=migrator t=2025-06-16T18:32:49.280937236Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-16T18:32:49.290246771Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=9.309465ms grafana | logger=migrator t=2025-06-16T18:32:49.293276226Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-16T18:32:49.293968891Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=692.455µs grafana | logger=migrator t=2025-06-16T18:32:49.296922515Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-16T18:32:49.297773362Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=849.647µs grafana | logger=migrator t=2025-06-16T18:32:49.305927698Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-16T18:32:49.312988575Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=7.059887ms grafana | logger=migrator t=2025-06-16T18:32:49.315746357Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T18:32:49.322919135Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.172188ms 
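Each "Executing migration" / "Migration successfully executed" pair above is one named, ordered schema migration; the microsecond-scale durations are migrations that turned out to be no-ops on this database. A minimal sketch of that named-migration-plus-log pattern follows, in illustrative Python/sqlite3 rather than Grafana's actual Go migrator; the dashboard_snapshot table and column names are hypothetical stand-ins:

# Sketch only: ordered migrations keyed by id, recorded in a log table so reruns skip them.
import sqlite3
import time

MIGRATIONS = [
    ("create dashboard_snapshot table",          # hypothetical schema for illustration
     "CREATE TABLE dashboard_snapshot (id INTEGER PRIMARY KEY)"),
    ("add snapshot encryption_key column",
     "ALTER TABLE dashboard_snapshot ADD COLUMN encryption_key TEXT"),
    ("add snapshot error_string column",
     "ALTER TABLE dashboard_snapshot ADD COLUMN error_string TEXT"),
]

def migrate(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS migration_log (id TEXT PRIMARY KEY)")
    for mig_id, sql in MIGRATIONS:
        # Already-applied ids are skipped, which is why repeated startups are cheap.
        if conn.execute("SELECT 1 FROM migration_log WHERE id = ?", (mig_id,)).fetchone():
            continue
        start = time.monotonic()
        print(f'msg="Executing migration" id="{mig_id}"')
        conn.execute(sql)
        conn.execute("INSERT INTO migration_log (id) VALUES (?)", (mig_id,))
        conn.commit()
        dur_ms = (time.monotonic() - start) * 1e3
        print(f'msg="Migration successfully executed" id="{mig_id}" duration={dur_ms:.3f}ms')

if __name__ == "__main__":
    migrate(sqlite3.connect(":memory:"))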
grafana | logger=migrator t=2025-06-16T18:32:49.325834459Z level=info msg="Executing migration" id="cleanup alert_rule_version table"
grafana | logger=migrator t=2025-06-16T18:32:49.325852679Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0
grafana | logger=migrator t=2025-06-16T18:32:49.32601345Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100
grafana | logger=migrator t=2025-06-16T18:32:49.32602896Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=194.681µs
grafana | logger=migrator t=2025-06-16T18:32:49.329297436Z level=info msg="Executing migration" id="populate rule guid in alert rule table"
grafana | logger=migrator t=2025-06-16T18:32:49.330350125Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=1.045569ms
grafana | logger=migrator t=2025-06-16T18:32:49.339293098Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns"
grafana | logger=migrator t=2025-06-16T18:32:49.341761717Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.468129ms
grafana | logger=migrator t=2025-06-16T18:32:49.345281486Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns"
grafana | logger=migrator t=2025-06-16T18:32:49.347504433Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=2.222427ms
grafana | logger=migrator t=2025-06-16T18:32:49.352820246Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns"
grafana | logger=migrator t=2025-06-16T18:32:49.354162927Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.342281ms
grafana | logger=migrator t=2025-06-16T18:32:49.360515169Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns"
grafana | logger=migrator t=2025-06-16T18:32:49.363023269Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=2.46843ms
grafana | logger=migrator t=2025-06-16T18:32:49.36817127Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule"
grafana | logger=migrator t=2025-06-16T18:32:49.378591305Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=10.419695ms
grafana | logger=migrator t=2025-06-16T18:32:49.382057942Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version"
grafana | logger=migrator t=2025-06-16T18:32:49.391301518Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.238015ms
grafana | logger=migrator t=2025-06-16T18:32:49.396301797Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule"
grafana | logger=migrator t=2025-06-16T18:32:49.408743528Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=12.442451ms
grafana | logger=migrator t=2025-06-16T18:32:49.413814819Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version"
grafana | logger=migrator t=2025-06-16T18:32:49.423471687Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.656728ms
grafana | logger=migrator t=2025-06-16T18:32:49.42752861Z level=info msg="Executing migration" id="remove the datasources:drilldown action"
grafana | logger=migrator t=2025-06-16T18:32:49.427785332Z level=info msg="Removed 0 datasources:drilldown permissions"
grafana | logger=migrator t=2025-06-16T18:32:49.427833472Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=303.412µs
grafana | logger=migrator t=2025-06-16T18:32:49.431062238Z level=info msg="Executing migration" id="remove title in folder unique index"
grafana | logger=migrator t=2025-06-16T18:32:49.432318168Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.25558ms
grafana | logger=migrator t=2025-06-16T18:32:49.436842345Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.756607374s
grafana | logger=migrator t=2025-06-16T18:32:49.438180576Z level=info msg="Unlocking database"
grafana | logger=sqlstore t=2025-06-16T18:32:49.455112462Z level=info msg="Created default admin" user=admin
grafana | logger=sqlstore t=2025-06-16T18:32:49.455474316Z level=info msg="Created default organization"
grafana | logger=secrets t=2025-06-16T18:32:49.463000746Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T18:32:49.561815445Z level=info msg="Restored cache from database" duration=478.775µs
grafana | logger=resource-migrator t=2025-06-16T18:32:49.570167012Z level=info msg="Locking database"
grafana | logger=resource-migrator t=2025-06-16T18:32:49.570186642Z level=info msg="Starting DB migrations"
grafana | logger=resource-migrator t=2025-06-16T18:32:49.577785723Z level=info msg="Executing migration" id="create resource_migration_log table"
grafana | logger=resource-migrator t=2025-06-16T18:32:49.5785521Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=768.187µs
grafana | logger=resource-migrator t=2025-06-16T18:32:49.584715699Z level=info msg="Executing migration" id="Initialize resource tables"
grafana | logger=resource-migrator t=2025-06-16T18:32:49.584731789Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=16.44µs
grafana | logger=resource-migrator t=2025-06-16T18:32:49.592866735Z level=info msg="Executing migration" id="drop table resource"
grafana | logger=resource-migrator t=2025-06-16T18:32:49.592947276Z level=info msg="Migration successfully executed" id="drop table resource" duration=80.701µs
grafana | logger=resource-migrator t=2025-06-16T18:32:49.599472488Z level=info msg="Executing migration" id="create table resource"
grafana | logger=resource-migrator t=2025-06-16T18:32:49.601141272Z level=info msg="Migration successfully executed" id="create table resource" duration=1.664634ms
grafana | logger=resource-migrator t=2025-06-16T18:32:49.606475394Z level=info msg="Executing migration" id="create table resource, index: 0"
grafana | logger=resource-migrator t=2025-06-16T18:32:49.608449961Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.974037ms
grafana | logger=resource-migrator t=2025-06-16T18:32:49.612887837Z level=info msg="Executing migration" id="drop table resource_history"
level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-16T18:32:49.612989627Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=101.43µs grafana | logger=resource-migrator t=2025-06-16T18:32:49.618956695Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-16T18:32:49.620607449Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.650034ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.626146574Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-16T18:32:49.627481744Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.3311ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.630460509Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-16T18:32:49.631673798Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.210079ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.636963821Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-16T18:32:49.637190662Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=230.381µs grafana | logger=resource-migrator t=2025-06-16T18:32:49.64553527Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-16T18:32:49.647064282Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.528502ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.650749892Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-16T18:32:49.651935292Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.1843ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.655776953Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-16T18:32:49.655860984Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=84.431µs grafana | logger=resource-migrator t=2025-06-16T18:32:49.659610223Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-16T18:32:49.661464579Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.854026ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.668975419Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-16T18:32:49.671310268Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.333719ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.674307603Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-16T18:32:49.675587953Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.27976ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.679346133Z level=info msg="Executing migration" id="Add column 
previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-16T18:32:49.689801568Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=10.454185ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.695161591Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-16T18:32:49.704585247Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=9.423766ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.708270827Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-16T18:32:49.709170634Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=899.647µs grafana | logger=resource-migrator t=2025-06-16T18:32:49.712261479Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-16T18:32:49.713192787Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=930.868µs grafana | logger=resource-migrator t=2025-06-16T18:32:49.71983788Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-16T18:32:49.730049063Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.210613ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.732919196Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-16T18:32:49.741497895Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=8.577549ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.744449059Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-16T18:32:49.744479829Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-16T18:32:49.744921122Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=472.013µs grafana | logger=resource-migrator t=2025-06-16T18:32:49.750591648Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-16T18:32:49.751916899Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.324721ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.756792729Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-16T18:32:49.767575036Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=10.781637ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.771124364Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-16T18:32:49.772175893Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.051109ms grafana | logger=resource-migrator t=2025-06-16T18:32:49.775163907Z level=info msg="migrations completed" performed=26 skipped=0 duration=197.448444ms grafana | logger=resource-migrator 
t=2025-06-16T18:32:49.77561574Z level=info msg="Unlocking database" grafana | t=2025-06-16T18:32:49.775837672Z level=info caller=logger.go:214 time=2025-06-16T18:32:49.775816632Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-16T18:32:49.786588399Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-16T18:32:49.831786654Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-16T18:32:49.831815224Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-16T18:32:49.831915695Z level=info msg="Plugins loaded" count=53 duration=45.330466ms grafana | logger=query_data t=2025-06-16T18:32:49.837810053Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-16T18:32:49.8424232Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-16T18:32:49.855723737Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-16T18:32:49.864870851Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-16T18:32:49.864892741Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-16T18:32:49.867678674Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:49.868417951Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=ngalert.state.manager t=2025-06-16T18:32:49.869157776Z level=info msg="Warming state cache for startup" grafana | logger=grafanaStorageLogger t=2025-06-16T18:32:49.869875232Z level=info msg="Storage starting" grafana | logger=ngalert.multiorg.alertmanager t=2025-06-16T18:32:49.871159822Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=http.server t=2025-06-16T18:32:49.871686127Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=provisioning.datasources t=2025-06-16T18:32:49.971039299Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=plugins.update.checker t=2025-06-16T18:32:49.973575799Z level=info msg="Update check succeeded" duration=105.24ms grafana | logger=grafana.update.checker t=2025-06-16T18:32:49.976038029Z level=info msg="Update check succeeded" duration=107.210525ms grafana | logger=sqlstore.transactions t=2025-06-16T18:32:49.983191607Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T18:32:50.019180537Z level=info msg="Patterns update finished" duration=149.622778ms grafana | logger=ngalert.state.manager t=2025-06-16T18:32:50.025282537Z level=info msg="State cache has been initialized" states=0 duration=156.125621ms grafana | logger=ngalert.scheduler t=2025-06-16T18:32:50.025337057Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-16T18:32:50.025455858Z level=info msg=starting first_tick=2025-06-16T18:33:00Z grafana | logger=provisioning.alerting t=2025-06-16T18:32:50.084786857Z level=info msg="starting to provision alerting" grafana | 
logger=provisioning.alerting t=2025-06-16T18:32:50.084821377Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-16T18:32:50.086270238Z level=info msg="starting to provision dashboards" grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.265802097Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.266663084Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.267944435Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.269489187Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.270074591Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.270649277Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.273297978Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.273845202Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.274330845Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-16T18:32:50.324575541Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-16T18:32:50.34432456Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-16T18:32:50.419192395Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-16T18:32:50.447794155Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:50.447818075Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=579.377184ms grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:50.447846196Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=plugin.installer t=2025-06-16T18:32:50.719317356Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=provisioning.dashboard t=2025-06-16T18:32:50.819457875Z level=info msg="finished to provision dashboards" grafana | logger=installer.fs t=2025-06-16T18:32:50.852604631Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-16T18:32:50.878873753Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:50.878900304Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=431.049788ms grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:50.878925914Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana 
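At this point Grafana is listening on :3000 and has provisioned the PolicyPrometheus datasource (uid dkSf71fnz). A CSIT suite typically polls before using it; a hypothetical readiness probe using the standard /api/health and /api/datasources/name/{name} Grafana endpoints is sketched below (the base URL and the stock admin/admin credentials are assumptions about this throwaway stack):

# Sketch only: wait for Grafana, then confirm the provisioned datasource is visible.
import time
import requests

BASE = "http://localhost:3000"

def wait_for_grafana(timeout_s: int = 120) -> None:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            health = requests.get(f"{BASE}/api/health", timeout=5).json()
            if health.get("database") == "ok":
                return
        except requests.RequestException:
            pass  # container may still be starting
        time.sleep(2)
    raise TimeoutError("grafana never became healthy")

wait_for_grafana()
r = requests.get(f"{BASE}/api/datasources/name/PolicyPrometheus", auth=("admin", "admin"))
print(r.status_code, r.json().get("uid"))  # expect 200 and uid dkSf71fnz per the log above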
grafana | logger=plugin.installer t=2025-06-16T18:32:51.051016732Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version=
grafana | logger=installer.fs t=2025-06-16T18:32:51.107346426Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app"
grafana | logger=plugins.registration t=2025-06-16T18:32:51.123012643Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app
grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:51.123035214Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=244.10503ms
grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:51.123059534Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version=
grafana | logger=plugin.installer t=2025-06-16T18:32:51.305441444Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version=
grafana | logger=installer.fs t=2025-06-16T18:32:51.360265166Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app"
grafana | logger=plugins.registration t=2025-06-16T18:32:51.376998931Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app
grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:51.377019061Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=253.954717ms
grafana | logger=infra.usagestats t=2025-06-16T18:34:28.878946113Z level=info msg="Usage stats are ready to report"
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
kafka | [2025-06-16 18:32:40,538] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,538] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,539] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,539] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,539] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,539] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,542] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@221af3c0 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,545] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-16 18:32:40,549] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-16 18:32:40,561] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 18:32:40,573] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 18:32:40,574] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 18:32:40,581] INFO Socket connection established, initiating session, client: /172.17.0.5:43366, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 18:32:40,604] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000026cbd0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 18:32:40,732] INFO Session: 0x10000026cbd0000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:40,732] INFO EventThread shut down for session: 0x10000026cbd0000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2025-06-16 18:32:41,438] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2025-06-16 18:32:41,752] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-16 18:32:41,826] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2025-06-16 18:32:41,827] INFO starting (kafka.server.KafkaServer)
kafka | [2025-06-16 18:32:41,828] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2025-06-16 18:32:41,840] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,846] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@5d8bafa9 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-16 18:32:41,849] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-16 18:32:41,854] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 18:32:41,857] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-16 18:32:41,858] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 18:32:41,862] INFO Socket connection established, initiating session, client: /172.17.0.5:50420, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 18:32:41,872] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000026cbd0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-16 18:32:41,878] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
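Both the "Check if Zookeeper is healthy" preflight earlier in the kafka log and the broker's own session here follow the same pattern: open a ZooKeeper session, wait for "Session establishment complete", and (for the preflight) close it again. A rough Python equivalent of that preflight using the kazoo client is sketched below; the host string and timeout mirror the log, everything else is an assumption:

# Sketch only: short-lived session as a ZooKeeper readiness check.
from kazoo.client import KazooClient

def zookeeper_ready(hosts: str = "zookeeper:2181", timeout_s: float = 40.0) -> bool:
    zk = KazooClient(hosts=hosts, timeout=timeout_s)
    try:
        zk.start(timeout=timeout_s)  # blocks until the session is established
        return True
    except Exception:
        return False
    finally:
        zk.stop()                    # mirrors "Session: ... closed" in the log
        zk.close()

print("zookeeper healthy:", zookeeper_ready())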
kafka | [2025-06-16 18:32:42,193] INFO Cluster ID = DURHhdNSQwy0Fksygi2p2A (kafka.server.KafkaServer)
kafka | [2025-06-16 18:32:42,197] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2025-06-16 18:32:42,253] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms = 5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms = 1000
kafka | controller.quorum.election.timeout.ms = 1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms = 2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms = 3600000
kafka | delegation.token.expiry.time.ms = 86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms = 604800000
kafka | delegation.token.secret.key = null
kafka | delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | early.start.listeners = null
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests = 1000
kafka | group.initial.rebalance.delay.ms = 3000
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms = 60000
kafka | inter.broker.listener.name = PLAINTEXT
kafka | inter.broker.protocol.version = 3.4-IV0
kafka | kafka.metrics.polling.interval.secs = 10
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds = 300
kafka | leader.imbalance.per.broker.percentage = 10
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | log.cleaner.backoff.ms = 15000
kafka | log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages = 9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms = 60000
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | log.message.timestamp.type = CreateTime
kafka | log.preallocate = false
kafka | log.retention.bytes = -1
kafka | log.retention.check.interval.ms = 300000
kafka | log.retention.hours = 168
kafka | log.retention.minutes = null
kafka | log.retention.ms = null
kafka | log.roll.hours = 168
kafka | log.roll.jitter.hours = 0
kafka | log.roll.jitter.ms = null
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
kafka | log.segment.delete.delay.ms = 60000
kafka | max.connection.creation.rate = 2147483647
kafka | max.connections = 2147483647
kafka | max.connections.per.ip = 2147483647
kafka | max.connections.per.ip.overrides =
kafka | max.incremental.fetch.session.cache.slots = 1000
kafka | message.max.bytes = 1048588
kafka | metadata.log.dir = null
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | metadata.log.max.snapshot.interval.ms = 3600000
kafka | metadata.log.segment.bytes = 1073741824
kafka | metadata.log.segment.min.bytes = 8388608
kafka | metadata.log.segment.ms = 604800000
kafka | metadata.max.idle.interval.ms = 500
kafka | metadata.max.retention.bytes = 104857600
kafka | metadata.max.retention.ms = 604800000
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
kafka | min.insync.replicas = 1
kafka | node.id = 1
kafka | num.io.threads = 8
kafka | num.network.threads = 3
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms = 600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm = null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.id.expiration.check.interval.ms = 600000
kafka | producer.id.expiration.ms = 86400000
kafka | producer.purgatory.purge.interval.requests = 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.manager.class.name = null
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = null
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = null
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2025-06-16 18:32:42,290] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-16 18:32:42,296] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-16 18:32:42,295] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-16 18:32:42,291] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-16 18:32:42,333] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2025-06-16 18:32:42,335] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2025-06-16 18:32:42,349] INFO Loaded 0 logs in 16ms. (kafka.log.LogManager)
kafka | [2025-06-16 18:32:42,349] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2025-06-16 18:32:42,351] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
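The KafkaConfig dump above advertises two listeners: PLAINTEXT://kafka:9092 resolves only inside the compose network, while PLAINTEXT_HOST://localhost:29092 is the address a client on the build host should bootstrap from. A hypothetical smoke test against that listener, using the kafka-python package (the topic name is made up; auto.create.topics.enable = true means producing to it will create it with num.partitions = 1 and default.replication.factor = 1 from the config above):

# Sketch only: verify the broker answers on the host-facing listener.
from kafka import KafkaAdminClient, KafkaProducer

admin = KafkaAdminClient(bootstrap_servers="localhost:29092", client_id="csit-smoke")
print(sorted(admin.list_topics()))  # broker is reachable if this returns, even []

producer = KafkaProducer(bootstrap_servers="localhost:29092")
producer.send("policy-csit-smoke", b"ping")  # hypothetical topic, auto-created
producer.flush()
producer.close()
admin.close()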
(kafka.log.LogManager)
kafka | [2025-06-16 18:32:42,367] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2025-06-16 18:32:42,410] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka | [2025-06-16 18:32:42,425] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2025-06-16 18:32:42,441] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-16 18:32:42,489] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-16 18:32:42,844] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-16 18:32:42,851] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-16 18:32:42,882] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2025-06-16 18:32:42,883] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-16 18:32:42,883] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-16 18:32:42,887] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2025-06-16 18:32:42,892] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-16 18:32:42,907] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-16 18:32:42,909] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-16 18:32:42,911] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-16 18:32:42,914] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-16 18:32:42,929] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2025-06-16 18:32:42,953] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-16 18:32:42,981] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750098762965,1750098762965,1,0,0,72057604452188161,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2025-06-16 18:32:42,982] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-16 18:32:43,034] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-16 18:32:43,040] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-16 18:32:43,045] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-16 18:32:43,046] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-16 18:32:43,059] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-16 18:32:43,079] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-16 18:32:43,084] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-16 18:32:43,090] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,094] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,098] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-16 18:32:43,101] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-16 18:32:43,109] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-16 18:32:43,126] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-16 18:32:43,142] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2025-06-16 18:32:43,142] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,148] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-16 18:32:43,153] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,156] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,158] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,166] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-16 18:32:43,174] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing.
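The /brokers/ids/1 registration recorded above is an ephemeral znode that lives only as long as the broker's ZooKeeper session. One way to re-read it while the stack is up, assuming the zookeeper-shell wrapper bundled with the Confluent images is on the container's PATH:

$ docker exec kafka zookeeper-shell zookeeper:2181 get /brokers/ids/1
# The JSON payload should echo the advertised endpoints registered above:
# ... "endpoints":["PLAINTEXT://kafka:9092","PLAINTEXT_HOST://localhost:29092"] ...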
(kafka.network.SocketServer)
kafka | [2025-06-16 18:32:43,175] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,180] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,194] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-16 18:32:43,200] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-16 18:32:43,200] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-16 18:32:43,200] INFO Kafka startTimeMs: 1750098763191 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-16 18:32:43,201] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2025-06-16 18:32:43,214] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-16 18:32:43,214] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,215] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,215] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,215] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,219] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,219] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,220] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,220] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-16 18:32:43,221] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,225] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-16 18:32:43,239] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-16 18:32:43,240] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-16 18:32:43,260] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-16 18:32:43,261] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-16 18:32:43,262] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-16 18:32:43,262] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-16 18:32:43,263] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2025-06-16 18:32:43,266] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-16 18:32:43,266] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,274] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,275] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,275] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,275] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,276] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,293] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:43,342] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-16 18:32:43,397] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-16 18:32:43,410] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-16 18:32:48,295] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2025-06-16 18:32:48,296] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2025-06-16 18:33:15,858] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2025-06-16 18:33:15,865] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2025-06-16 18:33:15,867] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-16 18:33:15,869] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 ->
ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-16 18:33:15,903] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(2hru3UDlRbuucjCtqV3rFg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(usxhw3kjTnCdSwJakDLH4w),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-16 18:33:15,904] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,916] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,916] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,916] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,916] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] 
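By this point the controller has accepted both topic requests logged above: policy-pdp-pap with a single partition and __consumer_offsets with 50 compacted partitions, every replica placed on broker 1. A quick verification once the partitions reach OnlinePartition further down, assuming the kafka-topics CLI shipped in the Confluent image:

$ docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe --topic policy-pdp-pap
$ docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe --topic __consumer_offsets
# Expect Leader: 1, Replicas: 1, Isr: 1 on every partition, and the
# cleanup.policy=compact,segment.bytes=104857600 overrides on __consumer_offsets.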
Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 
18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,929] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,929] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,929] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,929] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,929] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 18:33:15,929] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 18:33:16,065] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,065] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,065] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,065] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,065] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 
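Once broker 1 acknowledges the become-leader LeaderAndIsr requests in this batch, policy-pdp-pap-0 is writable. A minimal smoke test sketch using the console clients from the same image; only sensible on a scratch stack, since it publishes a junk record onto the topic the PDP and PAP share:

$ docker exec kafka bash -c \
    'echo smoke | kafka-console-producer --bootstrap-server kafka:9092 --topic policy-pdp-pap'
$ docker exec kafka kafka-console-consumer --bootstrap-server kafka:9092 \
    --topic policy-pdp-pap --from-beginning --max-messages 1
# Should print "smoke" and exit after consuming the single record.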
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-16 18:33:16,082] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-16 18:33:16,086] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 
from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | 
[2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 18:33:16,090] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 18:33:16,093] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-16 18:33:16,094] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,094] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 
from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 
18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 
(state.change.logger) kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 
(state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 
(state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-16 18:33:16,137] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-16 18:33:16,137] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 
(state.change.logger) kafka | [2025-06-16 18:33:16,137] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-16 18:33:16,137] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-16 18:33:16,138] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-16 18:33:16,139] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2025-06-16 18:33:16,203] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,216] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,218] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,219] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,220] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,239] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,240] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,240] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,240] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,240] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,249] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,249] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,249] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,249] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,249] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,259] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,260] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,260] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,260] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,260] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,271] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,272] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,272] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,272] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,272] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,280] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,281] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,281] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,281] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,281] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,289] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,290] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,290] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,290] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,290] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,298] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,299] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,299] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,299] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,299] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,307] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,307] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,307] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,307] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,308] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,317] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,317] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,317] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,317] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,317] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,325] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,326] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,326] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,326] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,327] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,337] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,338] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,338] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,338] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,338] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,347] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,348] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,348] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,348] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,348] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,359] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,363] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,363] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,363] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,363] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,401] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,409] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,409] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,409] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,409] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,418] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,419] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,420] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,421] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,421] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,430] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,431] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,431] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,431] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,431] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,438] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,439] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,439] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,439] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,440] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,447] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,448] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,448] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,448] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,448] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,460] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,461] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,461] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,461] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,461] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,469] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,470] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,470] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,470] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,470] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,479] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,479] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,479] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,479] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,480] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,490] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,491] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,491] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,491] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,491] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,499] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,499] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,499] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,499] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,500] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,513] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,514] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,514] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,514] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,515] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,527] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,528] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,528] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,528] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,528] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,534] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,535] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,535] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,535] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,535] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,542] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,542] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,542] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,542] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,543] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,551] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,552] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,552] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,552] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,553] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,560] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,560] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,560] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,560] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,560] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,567] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,568] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,568] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,568] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,568] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,575] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,575] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,575] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,575] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,575] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,583] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,583] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,583] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,583] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,584] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,590] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,590] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,591] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,591] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,591] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,599] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,600] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,600] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,600] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,600] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(2hru3UDlRbuucjCtqV3rFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,608] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,609] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,609] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,609] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,609] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,616] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,616] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,616] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,616] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,617] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
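Note that policy-pdp-pap-0, unlike the internal offsets partitions, is created "with properties {}", i.e. it falls back to broker defaults rather than a compacted configuration. A quick leadership/ISR check for the topic this CSIT actually exercises could look like the following (script name and bootstrap address are assumptions, as above):
$ kafka-topics --bootstrap-server kafka:9092 --describe --topic policy-pdp-pap
# expect a single partition reporting Leader: 1 and Isr: 1, matching the "ISR [1]" logged above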
(state.change.logger) kafka | [2025-06-16 18:33:16,623] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,624] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,624] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,624] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,624] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,631] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,633] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,633] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,633] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,633] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,644] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,645] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,645] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,645] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,645] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,653] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,654] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,654] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,654] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,654] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,661] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,662] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,662] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,662] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,662] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,671] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,671] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,671] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,671] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,671] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,677] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,678] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,678] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,678] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,678] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,685] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,686] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,686] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,686] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,686] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,694] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,694] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,694] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,694] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,695] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,701] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,702] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,702] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,702] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,702] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,708] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,709] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,709] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,709] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,709] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,716] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,717] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,717] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,717] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,717] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 18:33:16,728] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,728] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,728] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,728] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,728] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 18:33:16,735] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 18:33:16,736] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 18:33:16,736] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,736] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 18:33:16,736] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
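The TRACE entries that follow mirror the earlier "starting the become-leader transition" lines: one completion per partition, 51 in total (50 __consumer_offsets partitions plus policy-pdp-pap-0, matching the "Stopped fetchers ... for 51 partitions" entry above). A rough spot-check that broker 1 ends up leading every offsets partition — script name, address, and grep pattern are all assumptions:
$ kafka-topics --bootstrap-server kafka:9092 --describe --topic __consumer_offsets | grep -c 'Leader: 1'
# should print 50 if broker 1 leads all 50 offsets partitions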
(state.change.logger) kafka | [2025-06-16 18:33:16,743] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-16 18:33:16,752] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,753] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 
22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,759] INFO [Broker id=1] Finished LeaderAndIsr request in 666ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-16 18:33:16,761] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,765] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=usxhw3kjTnCdSwJakDLH4w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=2hru3UDlRbuucjCtqV3rFg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 18:33:16,766] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,766] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,766] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,766] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,769] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,769] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,769] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,769] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,769] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,770] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,770] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,770] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,771] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,772] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,772] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,772] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,772] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,772] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,773] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,773] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,773] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,773] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,774] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,774] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,774] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,774] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,774] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,775] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,775] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,775] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,775] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,775] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 
18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 
2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,776] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,777] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,777] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,777] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,777] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 18:33:16,777] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,778] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:16,778] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 18:33:16,778] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 18:33:17,354] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:17,371] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:17,687] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 78a84e9c-9f41-4395-81a2-9a0b7c619942 in Empty state. Created a new member id consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:17,690] INFO [GroupCoordinator 1]: Preparing to rebalance group 78a84e9c-9f41-4395-81a2-9a0b7c619942 in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0 with group instance id None; client reason: need to re-join with the given member-id: consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:17,795] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 183ef33a-1420-47be-a802-23c79d9c9b0a in Empty state. Created a new member id consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:17,800] INFO [GroupCoordinator 1]: Preparing to rebalance group 183ef33a-1420-47be-a802-23c79d9c9b0a in state PreparingRebalance with old generation 0 (__consumer_offsets-34) (reason: Adding new member consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf with group instance id None; client reason: need to re-join with the given member-id: consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:20,383] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:20,412] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:20,691] INFO [GroupCoordinator 1]: Stabilized group 78a84e9c-9f41-4395-81a2-9a0b7c619942 generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:20,697] INFO [GroupCoordinator 1]: Assignment received from leader consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0 for group 78a84e9c-9f41-4395-81a2-9a0b7c619942 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:20,801] INFO [GroupCoordinator 1]: Stabilized group 183ef33a-1420-47be-a802-23c79d9c9b0a generation 1 (__consumer_offsets-34) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 18:33:20,818] INFO [GroupCoordinator 1]: Assignment received from leader consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf for group 183ef33a-1420-47be-a802-23c79d9c9b0a for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-16 18:33:22,794] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-16 18:33:22,812] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(Nwn68w-mQdek3Bxx6PjCxw),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2025-06-16 18:33:22,812] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController)
kafka | [2025-06-16 18:33:22,812] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-16 18:33:22,812] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-16 18:33:22,812] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-16 18:33:22,812] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-16 18:33:22,818] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-16 18:33:22,818] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-16 18:33:22,818] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2025-06-16 18:33:22,818] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
kafka | [2025-06-16 18:33:22,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-16 18:33:22,818] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-16 18:33:22,819] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-16 18:33:22,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-16 18:33:22,820] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-16 18:33:22,826] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager)
kafka | [2025-06-16 18:33:22,826] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
kafka | [2025-06-16 18:33:22,830] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-16 18:33:22,832] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-16 18:33:22,833] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition)
kafka | [2025-06-16 18:33:22,833] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-16 18:33:22,833] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(Nwn68w-mQdek3Bxx6PjCxw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-16 18:33:22,836] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-16 18:33:22,837] INFO [Broker id=1] Finished LeaderAndIsr request in 18ms correlationId 3 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-16 18:33:22,838] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Nwn68w-mQdek3Bxx6PjCxw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-16 18:33:22,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-16 18:33:22,839] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-16 18:33:22,841] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-16 18:34:56,310] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-aefb4a2a-e212-41cc-907d-9c7f686b26b8 and request the member to rejoin with this id.
(kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-16 18:34:56,312] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-aefb4a2a-e212-41cc-907d-9c7f686b26b8 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-16 18:34:59,313] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-16 18:34:59,316] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-aefb4a2a-e212-41cc-907d-9c7f686b26b8 for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-16 18:34:59,436] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-aefb4a2a-e212-41cc-907d-9c7f686b26b8 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-16 18:34:59,437] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-16 18:34:59,439] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-aefb4a2a-e212-41cc-907d-9c7f686b26b8, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator)
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |
policy-api | :: Spring Boot ::                (v3.4.6)
policy-api |
policy-api | [2025-06-16T18:32:55.368+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-16T18:32:55.468+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 37 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-16T18:32:55.469+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-16T18:32:56.855+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-16T18:32:57.031+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 164 ms. Found 6 JPA repository interfaces.
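[Editor's note] The GroupCoordinator entries above trace a complete consumer-group lifecycle for group testgrp: an unknown member joins, the group rebalances and stabilizes at generation 1, and the member later leaves via an explicit LeaveGroup. A minimal sketch of a client that would drive the same sequence, assuming the kafka-python package (the CSIT client itself uses librdkafka, as the rdkafka- member id suggests); broker address, topic, and group name are taken from the log:

    # Illustrative only: join a consumer group, poll once, then leave.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "policy-notification",
        bootstrap_servers="kafka:9092",
        group_id="testgrp",              # triggers JoinGroup -> rebalance -> stabilize
        auto_offset_reset="earliest",
    )
    consumer.poll(timeout_ms=1000)       # assignment arrives once the group stabilizes
    print("assigned partitions:", consumer.assignment())
    consumer.close()                     # sends LeaveGroup, producing the 'has left group' entry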
policy-api | [2025-06-16T18:32:57.701+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-api | [2025-06-16T18:32:57.719+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-16T18:32:57.725+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2025-06-16T18:32:57.725+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-api | [2025-06-16T18:32:57.765+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2025-06-16T18:32:57.766+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2236 ms policy-api | [2025-06-16T18:32:58.088+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2025-06-16T18:32:58.170+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-api | [2025-06-16T18:32:58.218+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2025-06-16T18:32:58.622+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2025-06-16T18:32:58.660+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2025-06-16T18:32:58.860+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@5ba36c83 policy-api | [2025-06-16T18:32:58.861+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-api | [2025-06-16T18:32:58.945+00:00|INFO|pooling|main] HHH10001005: Database info: policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-api | Database driver: undefined/unknown policy-api | Database version: 16.4 policy-api | Autocommit mode: undefined/unknown policy-api | Isolation level: undefined/unknown policy-api | Minimum pool size: undefined/unknown policy-api | Maximum pool size: undefined/unknown policy-api | [2025-06-16T18:33:00.840+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2025-06-16T18:33:00.843+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2025-06-16T18:33:01.455+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2025-06-16T18:33:02.300+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2025-06-16T18:33:03.306+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-16T18:33:03.354+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-16T18:33:03.989+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-16T18:33:04.121+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-16T18:33:04.139+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-16T18:33:04.161+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.519 seconds (process running for 10.087)
policy-api | [2025-06-16T18:33:39.917+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-16T18:33:39.917+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-16T18:33:39.919+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
policy-api | [2025-06-16T18:34:31.975+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers:
policy-api | []
policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
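[Editor's note] The ROBOT_VARIABLES block above is handed straight to the robot CLI. The same invocation can be expressed through Robot Framework's Python API; a sketch under that assumption, showing only a subset of the variables (paths and values are copied from the log, and robot.run returns the number of failed tests, which the CSIT reports as RESULT):

    # Illustrative programmatic equivalent of the CSIT robot invocation above.
    from robot import run

    rc = run(
        "xacml-pdp-test.robot",
        "xacml-pdp-slas.robot",
        variable=[
            "POLICY_API_IP:policy-api:6969",
            "POLICY_PAP_IP:policy-pap:6969",
            "POLICY_PDPX_IP:policy-xacml-pdp:6969",
            "PROMETHEUS_IP:prometheus:9090",
            "KAFKA_IP:kafka:9092",
        ],
        outputdir="/tmp/results",   # matches the Output/Log/Report paths below
    )
    print(f"RESULT: {rc}")          # 0 means every test passed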
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | MakeTopics :: Creates the Policy topics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ExecuteXacmlPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
policy-csit | 4 tests, 4 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
policy-csit | 6 tests, 6 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
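[Editor's note] The migrator's startup gate is the nc retry loop above: keep attempting a TCP connect to postgres:5432 until it succeeds. A minimal stdlib sketch of the same wait; host and port are taken from the log, while the one-second retry interval and overall timeout are assumptions:

    import socket
    import time

    def wait_for_port(host: str, port: int, timeout: float = 120.0) -> None:
        """Block until a TCP connect to (host, port) succeeds, like `nc` in a loop."""
        deadline = time.monotonic() + timeout
        while True:
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    print(f"Connection to {host} {port} port [tcp] succeeded!")
                    return
            except OSError:
                if time.monotonic() > deadline:
                    raise TimeoutError(f"{host}:{port} never became reachable")
                print(f"nc: connect to {host} port {port} (tcp) failed: Connection refused")
                time.sleep(1.0)  # assumed retry cadence

    wait_for_port("postgres", 5432)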
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade
0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
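[Editor's note] Every "> upgrade NNNN-*.sql" block in this run follows the same shape: execute the script, append an audit row, and report rc=0 on success. A simplified sketch of that loop, assuming psycopg2 and a changelog table shaped like the policyadmin_schema_changelog listing shown near the end of this log; connection credentials, the directory layout, and the version tag are illustrative, not taken from the migrator's actual implementation:

    import pathlib
    import psycopg2

    # Hypothetical connection details; the CSIT talks to host postgres:5432
    # as policy_user, but the password is not shown in the log.
    conn = psycopg2.connect(host="postgres", dbname="policyadmin",
                            user="policy_user", password="...")

    def apply_script(path: pathlib.Path, from_v: str, to_v: str, tag: str) -> int:
        """Run one migration script and append a changelog row, mirroring rc=0."""
        try:
            with conn, conn.cursor() as cur:
                cur.execute(path.read_text())
                cur.execute(
                    "INSERT INTO policyadmin_schema_changelog "
                    "(script, operation, from_version, to_version, tag, success, attime) "
                    "VALUES (%s, 'upgrade', %s, %s, %s, 1, now())",
                    (path.name, from_v, to_v, tag),
                )
            return 0
        except psycopg2.Error:
            conn.rollback()
            return 1

    for script in sorted(pathlib.Path("sql/0800").glob("*.sql")):  # assumed layout
        print(f"> upgrade {script.name}")
        print(f"rc={apply_script(script, '0', '0800', '1606251832420800u')}")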
policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
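[Editor's note] The upgrade keeps cycling through these batches until "policyadmin: OK: upgrade (1300)" below, after which the migrator re-queries its bookkeeping tables (the name | version and changelog listings at the end of this log) to confirm the target version was reached. A minimal check along those lines, assuming the schema_versions table named in the NOTICE messages and reusing the hypothetical connection from the previous sketch:

    import psycopg2

    conn = psycopg2.connect(host="postgres", dbname="policyadmin",
                            user="policy_user", password="...")
    with conn.cursor() as cur:
        cur.execute("SELECT name, version FROM schema_versions")
        for name, version in cur.fetchall():
            print(f"{name} | {version}")   # expect: policyadmin | 1300
        cur.execute(
            "SELECT count(*) FROM policyadmin_schema_changelog WHERE success = 1"
        )
        print("successful migration steps:", cur.fetchone()[0])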
policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.729788 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.778712 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.835044 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.881425 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.931402 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.982311 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.032239 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.100787 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.144362 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 
1606251832420800u | 1 | 2025-06-16 18:32:43.19696 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.241597 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.285381 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.328382 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.37513 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.421232 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.483944 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.53237 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.582542 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.631682 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.685418 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.731343 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.773959 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.824842 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.874134 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.91748 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.968504 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.013279 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.064558 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.10737 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.16687 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.218019 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.273116 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.321444 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.373385 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.426817 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 
0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.474207 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.547337 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.598328 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.649959 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.705655 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.755656 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.804441 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.872891 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.926318 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.982289 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.034127 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.083713 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.138754 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.233416 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.291964 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.351934 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.399957 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.452992 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.503087 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.556646 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.628047 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.679306 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.729551 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.778622 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.831427 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.88747 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.953201 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 
2025-06-16 18:32:46.001404 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.055133 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.116947 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.168174 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.217734 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.301361 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.351572 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.398258 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.447654 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.500791 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.558256 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.60622 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.693882 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.744883 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.797301 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.847832 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.910437 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.9626 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.037295 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.093457 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.14473 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.200938 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.253472 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.308058 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.418434 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 
18:32:47.470413 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.522861 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.582588 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.639163 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.690628 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.771759 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.820117 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.870217 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.921619 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:47.969917 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.026913 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.117272 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.170948 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.238929 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.298754 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.358109 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.412303 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.547342 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.607863 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.667425 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.727661 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.791013 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:48.841733 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:48.929518 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:48.982024 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.039596 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1606251832421000u 
| 1 | 2025-06-16 18:32:49.089575 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.143935 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.197087 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.286823 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.338091 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1606251832421100u | 1 | 2025-06-16 18:32:49.385878 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1606251832421200u | 1 | 2025-06-16 18:32:49.435792 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1606251832421200u | 1 | 2025-06-16 18:32:49.489592 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1606251832421200u | 1 | 2025-06-16 18:32:49.542731 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1606251832421200u | 1 | 2025-06-16 18:32:49.605941 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1606251832421300u | 1 | 2025-06-16 18:32:49.65723 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1606251832421300u | 1 | 2025-06-16 18:32:49.709884 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1606251832421300u | 1 | 2025-06-16 18:32:49.759399 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0200-automationcompositiondefinition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | 
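The block above repeats one pattern per migration script: the migrator announces "> upgrade <script>.sql", psql echoes the DDL results, "rc=0" reports the exit code, and a trailing "INSERT 0 1" records the run in the per-database changelog table. A minimal sketch of that loop is below; the psql invocation and changelog column subset are assumptions inferred from the log, not the actual ONAP policy-db-migrator source.

```python
#!/usr/bin/env python3
"""Sketch of the per-script upgrade loop suggested by the migrator log."""
import subprocess
import time
from pathlib import Path


def run_upgrade(db: str, scripts: list[Path]) -> None:
    for script in sorted(scripts):
        # Matches the "> upgrade 0100-....sql" lines in the log.
        print(f"> upgrade {script.name}")
        rc = subprocess.run(["psql", "-d", db, "-f", str(script)]).returncode
        print(f"rc={rc}")
        # Each run is recorded in <db>_schema_changelog, which produces the
        # trailing "INSERT 0 1" after every script above. (Illustrative
        # column subset; the real table also tracks from/to versions.)
        tag = time.strftime("%y%m%d%H%M")
        subprocess.run([
            "psql", "-d", db, "-c",
            f"INSERT INTO {db}_schema_changelog (script, operation, tag, success, attime) "
            f"VALUES ('{script.name}', 'upgrade', '{tag}', {1 if rc == 0 else 0}, now())",
        ], check=True)
        if rc != 0:
            break  # stop at the first failing script
```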
policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 
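After the last 1601 -> 1701 script completes, the migrator verifies the result by reading back schema_versions and the changelog, which is what the "clampacm: OK" summary and the 37-row listing that follow reflect. A hedged sketch of an equivalent check, assuming a plain PostgreSQL driver and the connection details shown in the database listings (psycopg2 and the literal credentials here are illustrative, not taken from the job):

```python
import psycopg2  # any PostgreSQL driver would do; psycopg2 is an assumption

conn = psycopg2.connect(dbname="clampacm", user="policy_user",
                        password="CHANGE_ME", host="postgres")  # hypothetical
with conn, conn.cursor() as cur:
    # The "name | version" table printed after the upgrade.
    cur.execute("SELECT version FROM schema_versions WHERE name = 'clampacm'")
    version, = cur.fetchone()
    assert str(version) == "1701", f"clampacm not fully upgraded: {version}"

    # Every changelog row should show success = 1, as in the listing below.
    cur.execute("SELECT count(*) FROM clampacm_schema_changelog WHERE success <> 1")
    failed, = cur.fetchone()
    assert failed == 0, f"{failed} migration script(s) failed"
```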
policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.420857 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.477401 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.536619 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.594332 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.667999 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.723769 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.773141 policy-db-migrator | 8 | 
0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.829067 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.87523 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.930067 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:51.012741 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:51.057869 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:51.109919 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.158143 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.2067 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.269058 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.319714 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.398322 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.444804 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.490047 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.536726 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1606251832501600u | 1 | 2025-06-16 18:32:51.589523 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1606251832501600u | 1 | 2025-06-16 18:32:51.638528 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1606251832501601u | 1 | 2025-06-16 18:32:51.688593 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1606251832501601u | 1 | 2025-06-16 18:32:51.736929 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1606251832501700u | 1 | 2025-06-16 18:32:51.792273 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1606251832501700u | 1 | 2025-06-16 18:32:51.846888 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1606251832501700u | 1 | 2025-06-16 18:32:51.898778 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:51.950259 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.000805 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.059334 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.110635 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.15733 policy-db-migrator | 34 | 
0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.199257 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.244132 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.287551 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.332469 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | 
| | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1606251832521600u | 1 | 2025-06-16 18:32:52.976747 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 
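The pooling schema initialized above consists of a single script, 0100-distributed.locking.sql, which creates a lock table plus two indexes for database-backed distributed locking. The log does not show the table's columns, so the sketch below is purely illustrative: it assumes a table named locks keyed on resourceId with an owner and an expiration timestamp, and acquires or steals a lock atomically with an upsert.

```python
import psycopg2

def try_lock(conn, resource: str, owner: str, hold_secs: int = 60) -> bool:
    """Acquire a DB-backed lock; hypothetical schema, not the ONAP one."""
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO locks (resourceId, owner, expirationTime)
            VALUES (%s, %s, now() + make_interval(secs => %s))
            ON CONFLICT (resourceId) DO UPDATE
               SET owner = EXCLUDED.owner,
                   expirationTime = EXCLUDED.expirationTime
             WHERE locks.expirationTime < now()   -- steal only expired locks
            """,
            (resource, owner, hold_secs),
        )
        return cur.rowcount == 1  # 1 row inserted/updated means we hold it
```

This upsert-with-WHERE pattern keeps the acquire step to one round trip, assuming resourceId carries a unique constraint.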
policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1606251832531600u | 1 | 2025-06-16 18:32:53.568179 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1606251832531600u | 1 | 2025-06-16 18:32:53.623566 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600 policy-pap | Waiting for api port 6969... policy-pap | Waiting for kafka port 9092... policy-pap | api (172.17.0.7:6969) open policy-pap | kafka (172.17.0.5:9092) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . 
____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |
policy-pap |  :: Spring Boot ::                (v3.4.6)
policy-pap |
policy-pap | [2025-06-16T18:33:06.713+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 59 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2025-06-16T18:33:06.714+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default"
policy-pap | [2025-06-16T18:33:08.085+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2025-06-16T18:33:08.179+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 84 ms. Found 7 JPA repository interfaces.
policy-pap | [2025-06-16T18:33:09.080+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-pap | [2025-06-16T18:33:09.093+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2025-06-16T18:33:09.095+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2025-06-16T18:33:09.095+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-pap | [2025-06-16T18:33:09.146+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2025-06-16T18:33:09.146+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2377 ms
policy-pap | [2025-06-16T18:33:09.565+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-pap | [2025-06-16T18:33:09.642+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-pap | [2025-06-16T18:33:09.699+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-pap | [2025-06-16T18:33:10.127+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-pap | [2025-06-16T18:33:10.175+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-pap | [2025-06-16T18:33:10.385+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6e337ba1
policy-pap | [2025-06-16T18:33:10.387+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
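The ConsumerConfig dump that follows shows the pap subscribing to the policy-pdp-pap topic with the Java client (group.id 78a84e9c-..., latest offsets, auto-commit every 5 s, PLAINTEXT, String deserializers). As an illustrative sketch only, the same key settings expressed with the kafka-python client (an assumption; the pap itself uses the Java consumer logged below):

```python
from kafka import KafkaConsumer  # illustrative client choice, not what pap runs

# Mirrors the key values from the ConsumerConfig dump below.
consumer = KafkaConsumer(
    "policy-pdp-pap",                                  # Subscribed to topic(s)
    bootstrap_servers=["kafka:9092"],                  # bootstrap.servers
    group_id="78a84e9c-9f41-4395-81a2-9a0b7c619942",   # group.id
    auto_offset_reset="latest",                        # auto.offset.reset
    enable_auto_commit=True,                           # enable.auto.commit
    auto_commit_interval_ms=5000,                      # auto.commit.interval.ms
    security_protocol="PLAINTEXT",                     # security.protocol
    value_deserializer=lambda b: b.decode("utf-8"),    # StringDeserializer analogue
)
for record in consumer:
    print(record.topic, record.value)
```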
policy-pap | [2025-06-16T18:33:10.492+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-pap | Database driver: undefined/unknown
policy-pap | Database version: 16.4
policy-pap | Autocommit mode: undefined/unknown
policy-pap | Isolation level: undefined/unknown
policy-pap | Minimum pool size: undefined/unknown
policy-pap | Maximum pool size: undefined/unknown
policy-pap | [2025-06-16T18:33:12.432+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-pap | [2025-06-16T18:33:12.436+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-pap | [2025-06-16T18:33:13.580+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-1
policy-pap | 	client.rack =
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
policy-pap | 	fetch.max.wait.ms = 500
policy-pap | 	fetch.min.bytes = 1
policy-pap | 	group.id = 78a84e9c-9f41-4395-81a2-9a0b7c619942
policy-pap | 	group.instance.id = null
policy-pap | 	group.protocol = classic
policy-pap | 	group.remote.assignor = null
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
policy-pap | 	internal.leave.group.on.close = true
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | 	isolation.level = read_uncommitted
policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 	max.partition.fetch.bytes = 1048576
policy-pap | 	max.poll.interval.ms = 300000
policy-pap | 	max.poll.records = 500
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | 	receive.buffer.bytes = 65536
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	session.timeout.ms = 45000
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2025-06-16T18:33:13.632+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-16T18:33:13.761+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-16T18:33:13.762+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-16T18:33:13.762+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098793760
policy-pap | [2025-06-16T18:33:13.764+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-1, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-16T18:33:13.764+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-policy-pap-2
policy-pap | 	client.rack =
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
policy-pap | 	fetch.max.wait.ms
= 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location 
= null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-16T18:33:13.765+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T18:33:13.772+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T18:33:13.772+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T18:33:13.772+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098793772 policy-pap | [2025-06-16T18:33:13.772+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-16T18:33:14.105+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-16T18:33:14.219+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-16T18:33:14.291+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-16T18:33:14.529+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
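
The two ConsumerConfig dumps above come from PAP instantiating one KafkaConsumer per topic source: consumer-78a84e9c-…-1 in its own UUID-named group and consumer-policy-pap-2 in group policy-pap, both subscribed to policy-pdp-pap. Everything except bootstrap.servers, group.id, auto.offset.reset and the String deserializers is left at kafka-clients defaults. As a minimal sketch (assuming only the public kafka-clients API; the class and method names here are hypothetical, not PAP's actual wiring through SingleThreadedKafkaTopicSource), an equivalent consumer would be built like this:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PapConsumerSketch {
    // Hypothetical helper mirroring the ConsumerConfig values logged above.
    static KafkaConsumer<String, String> build(String groupId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of("policy-pdp-pap")); // matches "Subscribed to topic(s): policy-pdp-pap"
        return consumer;
    }
}

The auto.offset.reset = latest setting matters later in this log: with no committed offsets, each consumer starts at the log end rather than replaying old records.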
policy-pap | [2025-06-16T18:33:15.209+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-16T18:33:15.308+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-16T18:33:15.335+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-16T18:33:15.355+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-16T18:33:15.355+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-16T18:33:15.356+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-16T18:33:15.356+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-16T18:33:15.356+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-16T18:33:15.357+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-16T18:33:15.357+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-16T18:33:15.358+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=78a84e9c-9f41-4395-81a2-9a0b7c619942, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2e7e9897 policy-pap | [2025-06-16T18:33:15.368+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=78a84e9c-9f41-4395-81a2-9a0b7c619942, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-16T18:33:15.369+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 78a84e9c-9f41-4395-81a2-9a0b7c619942 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | 
group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null 
policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-16T18:33:15.369+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T18:33:15.376+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T18:33:15.376+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T18:33:15.376+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098795376 policy-pap | [2025-06-16T18:33:15.376+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-16T18:33:15.377+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-16T18:33:15.377+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=2ce83089-4029-40c4-8165-ced703c1674c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1a958d2a policy-pap | [2025-06-16T18:33:15.377+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=2ce83089-4029-40c4-8165-ced703c1674c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-16T18:33:15.377+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 
policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | 
ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-16T18:33:15.377+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098795383 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=2ce83089-4029-40c4-8165-ced703c1674c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=78a84e9c-9f41-4395-81a2-9a0b7c619942, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d86b2547-e3e9-4d78-9a7e-14c8aadfd29d, alive=false, publisher=null]]: starting policy-pap | [2025-06-16T18:33:15.394+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | 
max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 
policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-16T18:33:15.395+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T18:33:15.406+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-16T18:33:15.421+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T18:33:15.421+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T18:33:15.421+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098795421 policy-pap | [2025-06-16T18:33:15.422+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d86b2547-e3e9-4d78-9a7e-14c8aadfd29d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-16T18:33:15.422+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b38f4fd3-1c37-4a8a-8424-78fd1d3ae126, alive=false, publisher=null]]: starting policy-pap | [2025-06-16T18:33:15.423+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | 
sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-16T18:33:15.423+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T18:33:15.424+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
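
producer-1 and producer-2 above back the two sinks started next (policy-pdp-pap and policy-notification). The combination acks = -1 (i.e. acks=all) with enable.idempotence = true is what produces the "Instantiated an idempotent producer" lines, and retries = 2147483647 is the idempotence default. A minimal sketch of an equivalent producer, again assuming only the kafka-clients API and hypothetical helper names:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PapProducerSketch {
    static KafkaProducer<String, String> build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");              // logged as acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // triggers the idempotent-producer log line
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    // Usage: fire-and-forget publish of a JSON message, as the PDP-PAP sink does.
    static void publish(KafkaProducer<String, String> producer, String json) {
        producer.send(new ProducerRecord<>("policy-pdp-pap", json));
    }
}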
policy-pap | [2025-06-16T18:33:15.427+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T18:33:15.427+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T18:33:15.427+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098795427 policy-pap | [2025-06-16T18:33:15.428+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b38f4fd3-1c37-4a8a-8424-78fd1d3ae126, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-16T18:33:15.428+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-16T18:33:15.428+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-16T18:33:15.429+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-16T18:33:15.430+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-16T18:33:15.432+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-16T18:33:15.432+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-16T18:33:15.433+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-16T18:33:15.433+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-16T18:33:15.433+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-16T18:33:15.433+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-16T18:33:15.434+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-16T18:33:15.434+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.482 seconds (process running for 10.056) policy-pap | [2025-06-16T18:33:15.843+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: DURHhdNSQwy0Fksygi2p2A policy-pap | [2025-06-16T18:33:15.844+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-16T18:33:15.844+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Cluster ID: DURHhdNSQwy0Fksygi2p2A policy-pap | [2025-06-16T18:33:15.845+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: DURHhdNSQwy0Fksygi2p2A policy-pap | [2025-06-16T18:33:15.869+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-16T18:33:15.869+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-16T18:33:15.892+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | 
[2025-06-16T18:33:15.892+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: DURHhdNSQwy0Fksygi2p2A policy-pap | [2025-06-16T18:33:16.031+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-16T18:33:16.064+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-16T18:33:16.273+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-16T18:33:16.273+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-16T18:33:16.634+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-16T18:33:16.746+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-16T18:33:17.327+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-16T18:33:17.333+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-16T18:33:17.362+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9 policy-pap | [2025-06-16T18:33:17.362+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-16T18:33:17.681+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-16T18:33:17.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] (Re-)joining group policy-pap | [2025-06-16T18:33:17.688+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] 
[Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Request joining group due to: need to re-join with the given member-id: consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0 policy-pap | [2025-06-16T18:33:17.688+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] (Re-)joining group policy-pap | [2025-06-16T18:33:20.387+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9', protocol='range'} policy-pap | [2025-06-16T18:33:20.398+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-16T18:33:20.429+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9', protocol='range'} policy-pap | [2025-06-16T18:33:20.431+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-16T18:33:20.436+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-16T18:33:20.459+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-16T18:33:20.482+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
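
The "Found no committed offset" / "Resetting offset … to position FetchPosition{offset=1, …}" pair is auto.offset.reset = latest in action: this group has never committed, so the consumer jumps to the log end (offset 1, because one record is already on policy-pdp-pap-0). A hedged sketch of the same decision made by hand, using real kafka-clients calls inside hypothetical surrounding code:

import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetResetSketch {
    // Assumes the partition is already assigned to this consumer,
    // as it is after the group sync logged above.
    static void resetIfUncommitted(KafkaConsumer<String, String> consumer) {
        TopicPartition tp = new TopicPartition("policy-pdp-pap", 0);
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(Set.of(tp));
        if (committed.get(tp) == null) {      // "Found no committed offset"
            consumer.seekToEnd(Set.of(tp));   // what auto.offset.reset = latest does implicitly
            long pos = consumer.position(tp); // 1 in this run
            System.out.println("Resetting offset for " + tp + " to " + pos);
        }
    }
}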
policy-pap | [2025-06-16T18:33:20.693+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Successfully joined group with generation Generation{generationId=1, memberId='consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0', protocol='range'} policy-pap | [2025-06-16T18:33:20.694+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Finished assignment for group at generation 1: {consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-16T18:33:20.700+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Successfully synced group in generation Generation{generationId=1, memberId='consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0', protocol='range'} policy-pap | [2025-06-16T18:33:20.701+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-16T18:33:20.701+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-16T18:33:20.703+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-16T18:33:20.705+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
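
Both consumers walk through the same classic-protocol rebalance: discover coordinator → (Re-)joining → member-id assigned → joined generation 1 → range assignment of policy-pdp-pap-0 → sync → add partitions. All of it is driven from inside poll(); no extra API calls are involved. A minimal sketch of the fetch loop a KafkaConsumerWrapper-style source would run (the 15000 ms matches the fetchTimeout=15000 logged earlier; the handler is hypothetical):

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FetchLoopSketch {
    static void run(KafkaConsumer<String, String> consumer,
                    java.util.function.Consumer<String> handler) {
        while (true) {
            // The first poll() triggers the coordinator discovery, join, sync and
            // partition assignment seen in the log; later polls just fetch records.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000));
            for (ConsumerRecord<String, String> rec : records) {
                handler.accept(rec.value()); // e.g. hand off to a message dispatcher
            }
        }
    }
}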
policy-pap | [2025-06-16T18:33:21.953+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
policy-pap | []
policy-pap | [2025-06-16T18:33:21.954+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"d31f15e3-8200-426a-9c05-c67231bf3e73","timestampMs":1750098797398,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4"}
policy-pap | [2025-06-16T18:33:21.954+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"d31f15e3-8200-426a-9c05-c67231bf3e73","timestampMs":1750098797398,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4"}
policy-pap | [2025-06-16T18:33:21.956+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK
policy-pap | [2025-06-16T18:33:21.957+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_TOPIC_CHECK
policy-pap | [2025-06-16T18:33:22.010+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"421a3372-5f8e-464d-b798-a50b4b48cf6c","timestampMs":1750098801958,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-16T18:33:22.011+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"421a3372-5f8e-464d-b798-a50b4b48cf6c","timestampMs":1750098801958,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-16T18:33:22.017+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-16T18:33:22.620+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting
policy-pap | [2025-06-16T18:33:22.620+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting listener
policy-pap | [2025-06-16T18:33:22.620+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting timer
policy-pap | [2025-06-16T18:33:22.620+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=dbb93529-7620-483d-89b0-797ac3cb8b31, expireMs=1750098832620]
policy-pap | [2025-06-16T18:33:22.621+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting enqueue
policy-pap | [2025-06-16T18:33:22.622+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate started
policy-pap | [2025-06-16T18:33:22.622+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=dbb93529-7620-483d-89b0-797ac3cb8b31, expireMs=1750098832620]
policy-pap | [2025-06-16T18:33:22.625+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap |
{"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"dbb93529-7620-483d-89b0-797ac3cb8b31","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:22.659+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"dbb93529-7620-483d-89b0-797ac3cb8b31","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:22.661+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T18:33:22.662+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"dbb93529-7620-483d-89b0-797ac3cb8b31","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:22.662+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T18:33:22.764+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"dbb93529-7620-483d-89b0-797ac3cb8b31","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"90b3f482-cbc9-4416-b421-d6129b5f10b4","timestampMs":1750098802750,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:22.765+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping policy-pap | [2025-06-16T18:33:22.765+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping enqueue policy-pap | [2025-06-16T18:33:22.765+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping timer policy-pap | [2025-06-16T18:33:22.765+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=dbb93529-7620-483d-89b0-797ac3cb8b31, expireMs=1750098832620] policy-pap | [2025-06-16T18:33:22.766+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping listener policy-pap | [2025-06-16T18:33:22.766+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopped 
policy-pap | [2025-06-16T18:33:22.768+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"dbb93529-7620-483d-89b0-797ac3cb8b31","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"90b3f482-cbc9-4416-b421-d6129b5f10b4","timestampMs":1750098802750,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:22.769+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id dbb93529-7620-483d-89b0-797ac3cb8b31 policy-pap | [2025-06-16T18:33:22.772+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"504884f8-f384-4692-b040-357f65737559","timestampMs":1750098802756,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:22.782+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.Naming","policy-type-version":"1.0.0","policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-16T18:33:22.783+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate successful policy-pap | [2025-06-16T18:33:22.783+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 start publishing next request policy-pap | [2025-06-16T18:33:22.783+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange starting policy-pap | [2025-06-16T18:33:22.783+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange starting listener policy-pap | [2025-06-16T18:33:22.784+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange starting timer policy-pap | [2025-06-16T18:33:22.784+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=100c0bdc-0836-4c51-8f89-991d9512ea35, expireMs=1750098832784] policy-pap | [2025-06-16T18:33:22.784+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange starting enqueue policy-pap | [2025-06-16T18:33:22.784+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=100c0bdc-0836-4c51-8f89-991d9512ea35, expireMs=1750098832784] policy-pap | [2025-06-16T18:33:22.784+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange started policy-pap | [2025-06-16T18:33:22.785+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"100c0bdc-0836-4c51-8f89-991d9512ea35","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | 
[2025-06-16T18:33:22.807+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Error while fetching metadata with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-16T18:33:23.120+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"504884f8-f384-4692-b040-357f65737559","timestampMs":1750098802756,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:23.121+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-16T18:33:23.125+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"100c0bdc-0836-4c51-8f89-991d9512ea35","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:23.125+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-16T18:33:23.125+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"100c0bdc-0836-4c51-8f89-991d9512ea35","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"39a1e321-3725-4f00-b036-713652cd70c3","timestampMs":1750098802800,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:23.376+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange stopping policy-pap | [2025-06-16T18:33:23.376+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange stopping enqueue policy-pap | [2025-06-16T18:33:23.376+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange stopping timer policy-pap | [2025-06-16T18:33:23.376+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=100c0bdc-0836-4c51-8f89-991d9512ea35, expireMs=1750098832784] policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange stopping listener policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange stopped policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange successful policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 start publishing next request policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate 
starting listener policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting timer policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=10aa937b-f7d1-4c76-92ce-87031228576d, expireMs=1750098833377] policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting enqueue policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate started policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"10aa937b-f7d1-4c76-92ce-87031228576d","timestampMs":1750098803112,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:23.383+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"100c0bdc-0836-4c51-8f89-991d9512ea35","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:23.383+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-16T18:33:23.387+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"10aa937b-f7d1-4c76-92ce-87031228576d","timestampMs":1750098803112,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:23.387+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T18:33:23.390+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"100c0bdc-0836-4c51-8f89-991d9512ea35","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"39a1e321-3725-4f00-b036-713652cd70c3","timestampMs":1750098802800,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:23.390+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 100c0bdc-0836-4c51-8f89-991d9512ea35 policy-pap | [2025-06-16T18:33:23.400+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"10aa937b-f7d1-4c76-92ce-87031228576d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"05bc3ec8-2c2e-4f60-9242-cc6c3fc1f912","timestampMs":1750098803388,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | 
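The lone WARN above, {policy-notification=LEADER_NOT_AVAILABLE}, is almost always transient: PAP's first publish to the policy-notification topic triggers broker-side auto-creation, and the producer's metadata fetch can race the leader election, so the client retries internally and the notification is delivered moments later (as the [OUT|KAFKA|policy-notification] record above shows). One way to keep the warning out of CSIT logs is to pre-create the topic, for example with the kafka-clients AdminClient; a sketch, where the bootstrap address kafka:9092 and the topic name come from this log, and the single partition with replication factor 1 is an assumption for a one-broker test bed:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateNotificationTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // 1 partition, replication factor 1: enough for a single-broker CSIT bed
                admin.createTopics(List.of(new NewTopic("policy-notification", 1, (short) 1)))
                     .all().get(); // block until the controller confirms creation
            }
        }
    }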
[2025-06-16T18:33:23.401+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping enqueue policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping timer policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=10aa937b-f7d1-4c76-92ce-87031228576d, expireMs=1750098833377] policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping listener policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopped policy-pap | [2025-06-16T18:33:23.402+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"10aa937b-f7d1-4c76-92ce-87031228576d","timestampMs":1750098803112,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:23.403+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T18:33:23.406+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate successful policy-pap | [2025-06-16T18:33:23.406+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 has no more requests policy-pap | [2025-06-16T18:33:23.407+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"10aa937b-f7d1-4c76-92ce-87031228576d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"05bc3ec8-2c2e-4f60-9242-cc6c3fc1f912","timestampMs":1750098803388,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:33:23.408+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 10aa937b-f7d1-4c76-92ce-87031228576d policy-pap | [2025-06-16T18:33:41.610+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-16T18:33:41.610+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-16T18:33:41.612+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms policy-pap | [2025-06-16T18:33:52.620+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=dbb93529-7620-483d-89b0-797ac3cb8b31, expireMs=1750098832620] policy-pap | [2025-06-16T18:33:52.784+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=100c0bdc-0836-4c51-8f89-991d9512ea35, expireMs=1750098832784] policy-pap | [2025-06-16T18:34:35.172+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group defaultGroup policy-pap | 
[2025-06-16T18:34:35.173+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy onap.restart.tca 1.0.0 to subgroup defaultGroup xacml count=2 policy-pap | [2025-06-16T18:34:35.174+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-16T18:34:35.174+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 defaultGroup xacml policies=1 policy-pap | [2025-06-16T18:34:35.175+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup policy-pap | [2025-06-16T18:34:35.215+00:00|INFO|SessionData|http-nio-6969-exec-3] use cached group defaultGroup policy-pap | [2025-06-16T18:34:35.216+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy OSDF_CASABLANCA.Affinity_Default 1.0.0 to subgroup defaultGroup xacml count=3 policy-pap | [2025-06-16T18:34:35.216+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy OSDF_CASABLANCA.Affinity_Default 1.0.0 policy-pap | [2025-06-16T18:34:35.216+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 defaultGroup xacml policies=2 policy-pap | [2025-06-16T18:34:35.216+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup policy-pap | [2025-06-16T18:34:35.216+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group defaultGroup policy-pap | [2025-06-16T18:34:35.235+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-16T18:34:35Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=OSDF_CASABLANCA.Affinity_Default 1.0.0, action=DEPLOYMENT, timestamp=2025-06-16T18:34:35Z, user=policyadmin)] policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting listener policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting timer policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer [name=678eb842-8de7-4880-84c1-f110a1ff3c27, expireMs=1750098905268] policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting enqueue policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate started policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=678eb842-8de7-4880-84c1-f110a1ff3c27, expireMs=1750098905268] policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"678eb842-8de7-4880-84c1-f110a1ff3c27","timestampMs":1750098875216,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:34:35.280+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"678eb842-8de7-4880-84c1-f110a1ff3c27","timestampMs":1750098875216,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:34:35.280+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T18:34:35.281+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"678eb842-8de7-4880-84c1-f110a1ff3c27","timestampMs":1750098875216,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:34:35.282+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"678eb842-8de7-4880-84c1-f110a1ff3c27","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e85dcd01-b32e-47b7-bd0b-30c0aea4d73f","timestampMs":1750098875924,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping enqueue policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping timer policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=678eb842-8de7-4880-84c1-f110a1ff3c27, expireMs=1750098905268] policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping listener policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopped policy-pap | 
[2025-06-16T18:34:35.932+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"678eb842-8de7-4880-84c1-f110a1ff3c27","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e85dcd01-b32e-47b7-bd0b-30c0aea4d73f","timestampMs":1750098875924,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:34:35.933+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 678eb842-8de7-4880-84c1-f110a1ff3c27 policy-pap | [2025-06-16T18:34:35.942+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate successful policy-pap | [2025-06-16T18:34:35.942+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 has no more requests policy-pap | [2025-06-16T18:34:35.942+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0},{"policy-type":"onap.policies.optimization.resource.AffinityPolicy","policy-type-version":"1.0.0","policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-16T18:34:59.939+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup policy-pap | [2025-06-16T18:34:59.940+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup defaultGroup xacml count=2 policy-pap | [2025-06-16T18:34:59.940+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-16T18:34:59.940+00:00|INFO|SessionData|http-nio-6969-exec-5] add update xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 defaultGroup xacml policies=0 policy-pap | [2025-06-16T18:34:59.940+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group defaultGroup policy-pap | [2025-06-16T18:34:59.941+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group defaultGroup policy-pap | [2025-06-16T18:34:59.953+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-16T18:34:59Z, user=policyadmin)] policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting listener policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting timer policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|TimerManager|http-nio-6969-exec-5] update timer registered Timer [name=56415037-05c3-4c38-b9fb-020356e71e7c, expireMs=1750098929962] policy-pap | 
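Each [OUT|KAFKA|policy-notification] record in this run has the same stable shape: deployed-policies and undeployed-policies arrays whose entries carry per-policy success/failure/incomplete counts. A minimal consumer-side parse of that payload, with hypothetical DTO names and assuming Jackson 2.12+ for record support (the hyphenated field names are verbatim from the log):

    import java.util.List;
    import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
    import com.fasterxml.jackson.annotation.JsonProperty;
    import com.fasterxml.jackson.databind.ObjectMapper;

    @JsonIgnoreProperties(ignoreUnknown = true) // skips policy-type / policy-type-version
    record PolicyStatus(
            @JsonProperty("policy-id") String policyId,
            @JsonProperty("policy-version") String policyVersion,
            @JsonProperty("success-count") int successCount,
            @JsonProperty("failure-count") int failureCount,
            @JsonProperty("incomplete-count") int incompleteCount) { }

    record PolicyNotification(
            @JsonProperty("deployed-policies") List<PolicyStatus> deployed,
            @JsonProperty("undeployed-policies") List<PolicyStatus> undeployed) { }

    public class NotificationParser {
        public static void main(String[] args) throws Exception {
            String json = "{\"deployed-policies\":[{\"policy-id\":\"onap.restart.tca\","
                    + "\"policy-version\":\"1.0.0\",\"success-count\":1,"
                    + "\"failure-count\":0,\"incomplete-count\":0}],"
                    + "\"undeployed-policies\":[]}";
            PolicyNotification n = new ObjectMapper().readValue(json, PolicyNotification.class);
            System.out.println(n.deployed().size() + " deployed, "
                    + n.undeployed().size() + " undeployed");
        }
    }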
[2025-06-16T18:34:59.962+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting enqueue policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate started policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"56415037-05c3-4c38-b9fb-020356e71e7c","timestampMs":1750098899940,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:34:59.974+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"56415037-05c3-4c38-b9fb-020356e71e7c","timestampMs":1750098899940,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:34:59.974+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"56415037-05c3-4c38-b9fb-020356e71e7c","timestampMs":1750098899940,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:34:59.974+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T18:34:59.974+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T18:34:59.979+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"56415037-05c3-4c38-b9fb-020356e71e7c","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"3270cf9c-3884-4825-aa2b-8edb8611600f","timestampMs":1750098899970,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:34:59.979+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 56415037-05c3-4c38-b9fb-020356e71e7c policy-pap | [2025-06-16T18:34:59.985+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"56415037-05c3-4c38-b9fb-020356e71e7c","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"3270cf9c-3884-4825-aa2b-8edb8611600f","timestampMs":1750098899970,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:34:59.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping policy-pap | [2025-06-16T18:34:59.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping enqueue policy-pap | [2025-06-16T18:34:59.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping timer policy-pap | [2025-06-16T18:34:59.985+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=56415037-05c3-4c38-b9fb-020356e71e7c, expireMs=1750098929962] policy-pap | [2025-06-16T18:34:59.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping listener policy-pap | [2025-06-16T18:34:59.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopped policy-pap | [2025-06-16T18:34:59.999+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate successful policy-pap | [2025-06-16T18:34:59.999+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 has no more requests policy-pap | [2025-06-16T18:34:59.999+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-16T18:35:05.268+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=678eb842-8de7-4880-84c1-f110a1ff3c27, expireMs=1750098905268] policy-pap | [2025-06-16T18:35:15.435+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-16T18:35:22.775+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"846e2fcb-c890-4d0f-a2c8-5f3e4f1941ca","timestampMs":1750098922765,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-16T18:35:22.775+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"846e2fcb-c890-4d0f-a2c8-5f3e4f1941ca","timestampMs":1750098922765,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | 
[2025-06-16T18:35:22.776+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-xacml-pdp | Waiting for pap port 6969... policy-xacml-pdp | pap (172.17.0.8:6969) open policy-xacml-pdp | Waiting for kafka port 9092... policy-xacml-pdp | kafka (172.17.0.5:9092) open policy-xacml-pdp | + KEYSTORE=/opt/app/policy/pdpx/etc/ssl/policy-keystore policy-xacml-pdp | + TRUSTSTORE=/opt/app/policy/pdpx/etc/ssl/policy-truststore policy-xacml-pdp | + KEYSTORE_PASSWD=Pol1cy_0nap policy-xacml-pdp | + TRUSTSTORE_PASSWD=Pol1cy_0nap policy-xacml-pdp | + '[' 0 -ge 1 ] policy-xacml-pdp | + CONFIG_FILE= policy-xacml-pdp | + '[' -z ] policy-xacml-pdp | + CONFIG_FILE=/opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-truststore ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-keystore ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/xacml.properties ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/logback.xml ] policy-xacml-pdp | + echo 'Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json' policy-xacml-pdp | Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | + /usr/lib/jvm/default-jvm/bin/java -cp '/opt/app/policy/pdpx/etc:/opt/app/policy/pdpx/lib/*' '-Dlogback.configurationFile=/opt/app/policy/pdpx/etc/logback.xml' '-Djavax.net.ssl.keyStore=/opt/app/policy/pdpx/etc/ssl/policy-keystore' '-Djavax.net.ssl.keyStorePassword=Pol1cy_0nap' '-Djavax.net.ssl.trustStore=/opt/app/policy/pdpx/etc/ssl/policy-truststore' '-Djavax.net.ssl.trustStorePassword=Pol1cy_0nap' org.onap.policy.pdpx.main.startstop.Main -c /opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | [2025-06-16T18:33:16.683+00:00|INFO|Main|main] Starting policy xacml pdp service with arguments - [-c, /opt/app/policy/pdpx/etc/defaultConfig.json] policy-xacml-pdp | [2025-06-16T18:33:16.774+00:00|INFO|XacmlPdpActivator|main] Activator initializing using org.onap.policy.pdpx.main.parameters.XacmlPdpParameterGroup@37858383 policy-xacml-pdp | [2025-06-16T18:33:16.816+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-xacml-pdp | allow.auto.create.topics = true policy-xacml-pdp | auto.commit.interval.ms = 5000 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | auto.offset.reset = latest policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | check.crcs = true policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-1 policy-xacml-pdp | client.rack = policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | default.api.timeout.ms = 60000 policy-xacml-pdp | enable.auto.commit = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | exclude.internal.topics = true policy-xacml-pdp | fetch.max.bytes = 52428800 policy-xacml-pdp | fetch.max.wait.ms = 500 policy-xacml-pdp | fetch.min.bytes = 1 policy-xacml-pdp | group.id = 183ef33a-1420-47be-a802-23c79d9c9b0a policy-xacml-pdp | group.instance.id = null policy-xacml-pdp | group.protocol = classic policy-xacml-pdp | group.remote.assignor = null policy-xacml-pdp | heartbeat.interval.ms = 3000 policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | internal.leave.group.on.close = true policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-xacml-pdp | isolation.level = read_uncommitted policy-xacml-pdp | 
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | max.partition.fetch.bytes = 1048576 policy-xacml-pdp | max.poll.interval.ms = 300000 policy-xacml-pdp | max.poll.records = 500 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-xacml-pdp | receive.buffer.bytes = 65536 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | session.timeout.ms = 45000 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = 
JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | policy-xacml-pdp | [2025-06-16T18:33:16.851+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp | [2025-06-16T18:33:16.983+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-16T18:33:16.983+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-16T18:33:16.983+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098796982 policy-xacml-pdp | [2025-06-16T18:33:16.985+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-1, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Subscribed to topic(s): policy-pdp-pap policy-xacml-pdp | [2025-06-16T18:33:17.042+00:00|INFO|XacmlPdpApplicationManager|main] Initialization applications org.onap.policy.pdpx.main.parameters.XacmlApplicationParameters@7ec3394b JerseyClient(name=policyApiParameters, https=false, selfSignedCerts=false, hostname=policy-api, port=6969, basePath=null, userName=policyadmin, password=zb!XztG34, client=org.glassfish.jersey.client.JerseyClient@698122b2, baseUrl=http://policy-api:6969/, alive=true) policy-xacml-pdp | [2025-06-16T18:33:17.053+00:00|INFO|XacmlPdpApplicationManager|main] Application guard supports [onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0] policy-xacml-pdp | [2025-06-16T18:33:17.054+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath guard at this path /opt/app/policy/pdpx/apps/guard policy-xacml-pdp | [2025-06-16T18:33:17.054+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/guard policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/guard/xacml.properties policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, 
xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.persistenceunit -> OperationsHistoryPU policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.name -> GetOperationOutcome policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.description -> Returns operation outcome policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.description -> Returns operation counts based on time window policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.password -> policy_user policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.issuer -> urn:org:onap:xacml:guard:get-operation-outcome policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.persistenceunit -> OperationsHistoryPU policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.driver -> org.postgresql.Driver policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.name -> 
CountRecentOperations policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.url -> jdbc:postgresql://postgres:5432/operationshistory policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.user -> policy_user policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.issuer -> urn:org:onap:xacml:guard:count-recent-operations policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] xacml.pip.engines -> count-recent-operations,get-operation-outcome policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|StdXacmlApplicationServiceProvider|main] {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | 
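The XacmlPolicyUtils block above is doing nothing more exotic than loading a java.util.Properties file and echoing every key -> value pair before handing the map to the guard application. An equivalent stand-alone sketch; the path is the guard application's file from the log, and loadXacmlProperties is an invented helper name:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Properties;

    public class XacmlPropertiesEcho {
        // Invented helper mirroring "Loading xacml properties <path>" and the
        // per-entry "key -> value" lines in the log above.
        static Properties loadXacmlProperties(Path path) throws IOException {
            System.out.println("Loading xacml properties " + path);
            Properties props = new Properties();
            try (InputStream in = Files.newInputStream(path)) {
                props.load(in);
            }
            System.out.println("Loaded xacml properties");
            props.forEach((key, value) -> System.out.println(key + " -> " + value));
            return props;
        }

        public static void main(String[] args) throws IOException {
            loadXacmlProperties(Path.of("/opt/app/policy/pdpx/apps/guard/xacml.properties"));
        }
    }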
[2025-06-16T18:33:17.059+00:00|WARN|XACMLProperties|main] Properties file /usr/lib/jvm/java-17-openjdk/lib/xacml.properties cannot be read. policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPdpApplicationManager|main] Application optimization supports [onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0] policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath optimization at this path /opt/app/policy/pdpx/apps/optimization policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/optimization policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/optimization/xacml.properties policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> 
org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-16T18:33:17.088+00:00|INFO|XacmlPdpApplicationManager|main] Application naming supports [onap.policies.Naming 1.0.0] policy-xacml-pdp | [2025-06-16T18:33:17.088+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath naming at this path /opt/app/policy/pdpx/apps/naming policy-xacml-pdp | [2025-06-16T18:33:17.088+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/naming policy-xacml-pdp | [2025-06-16T18:33:17.088+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/naming/xacml.properties policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 
policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPdpApplicationManager|main] Application native supports [onap.policies.native.Xacml 1.0.0, onap.policies.native.ToscaXacml 1.0.0] policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath native at this path /opt/app/policy/pdpx/apps/native policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/native policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/native/xacml.properties policy-xacml-pdp | 
[2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, 
xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-16T18:33:17.094+00:00|INFO|XacmlPdpApplicationManager|main] Application match supports [onap.policies.Match 1.0.0] policy-xacml-pdp | [2025-06-16T18:33:17.094+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath match at this path /opt/app/policy/pdpx/apps/match policy-xacml-pdp | [2025-06-16T18:33:17.094+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/match policy-xacml-pdp | [2025-06-16T18:33:17.094+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/match/xacml.properties policy-xacml-pdp | [2025-06-16T18:33:17.094+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | 
[2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPdpApplicationManager|main] Application monitoring supports [onap.Monitoring 1.0.0] policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath monitoring at this path /opt/app/policy/pdpx/apps/monitoring policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/monitoring policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | 
[2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPdpApplicationManager|main] Finished applications initialization {optimize=org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplication@2b95e48b, native=org.onap.policy.xacml.pdp.application.nativ.NativePdpApplication@4a3329b9, guard=org.onap.policy.xacml.pdp.application.guard.GuardPdpApplication@3dddefd8, naming=org.onap.policy.xacml.pdp.application.naming.NamingPdpApplication@160ac7fb, match=org.onap.policy.xacml.pdp.application.match.MatchPdpApplication@12bfd80d, configure=org.onap.policy.xacml.pdp.application.monitoring.MonitoringPdpApplication@41925502} policy-xacml-pdp | [2025-06-16T18:33:17.114+00:00|INFO|XacmlPdpHearbeatPublisher|main] heartbeat topic probe 4000ms policy-xacml-pdp | [2025-06-16T18:33:17.299+00:00|INFO|ServiceManager|main] service manager starting policy-xacml-pdp | 
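Each application above follows the same initializeApplicationPath pattern: resolve /opt/app/policy/pdpx/apps/<app>, load that directory's xacml.properties, and echo every key as "key -> value". A sketch of that loop under the same layout (the AppInit class is illustrative; the paths and application names are the ones logged above):

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.Properties;

    public class AppInit {
        public static void main(String[] args) throws IOException {
            // Application names reported by XacmlPdpApplicationManager above
            for (String app : List.of("optimization", "naming", "native", "match", "monitoring")) {
                Path file = Path.of("/opt/app/policy/pdpx/apps", app, "xacml.properties");
                Properties props = new Properties();
                try (InputStream in = Files.newInputStream(file)) {
                    props.load(in);
                }
                // Echo keys the way XacmlPolicyUtils does
                props.stringPropertyNames().forEach(
                        k -> System.out.println(k + " -> " + props.getProperty(k)));
            }
        }
    }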
[2025-06-16T18:33:17.300+00:00|INFO|ServiceManager|main] service manager starting XACML PDP parameters policy-xacml-pdp | [2025-06-16T18:33:17.300+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-xacml-pdp | [2025-06-16T18:33:17.300+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f574cc2 policy-xacml-pdp | [2025-06-16T18:33:17.312+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-xacml-pdp | [2025-06-16T18:33:17.312+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-xacml-pdp | allow.auto.create.topics = true policy-xacml-pdp | auto.commit.interval.ms = 5000 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | auto.offset.reset = latest policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | check.crcs = true policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2 policy-xacml-pdp | client.rack = policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | default.api.timeout.ms = 60000 policy-xacml-pdp | enable.auto.commit = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | exclude.internal.topics = true policy-xacml-pdp | fetch.max.bytes = 52428800 policy-xacml-pdp | fetch.max.wait.ms = 500 policy-xacml-pdp | fetch.min.bytes = 1 policy-xacml-pdp | group.id = 183ef33a-1420-47be-a802-23c79d9c9b0a policy-xacml-pdp | group.instance.id = null policy-xacml-pdp | group.protocol = classic policy-xacml-pdp | group.remote.assignor = null policy-xacml-pdp | heartbeat.interval.ms = 3000 policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | internal.leave.group.on.close = true policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-xacml-pdp | isolation.level = read_uncommitted policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | max.partition.fetch.bytes = 1048576 policy-xacml-pdp | max.poll.interval.ms = 300000 policy-xacml-pdp | max.poll.records = 500 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO 
policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-xacml-pdp | receive.buffer.bytes = 65536 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | session.timeout.ms = 45000 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | value.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | policy-xacml-pdp | [2025-06-16T18:33:17.313+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp | [2025-06-16T18:33:17.326+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-16T18:33:17.326+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-16T18:33:17.326+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098797326 policy-xacml-pdp | [2025-06-16T18:33:17.326+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Subscribed to topic(s): policy-pdp-pap policy-xacml-pdp | [2025-06-16T18:33:17.326+00:00|INFO|ServiceManager|main] service manager starting topics policy-xacml-pdp | [2025-06-16T18:33:17.327+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-xacml-pdp | [2025-06-16T18:33:17.327+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9411ee5f-cd70-4cb0-9055-3ef4a34488c1, alive=false, publisher=null]]: starting policy-xacml-pdp | [2025-06-16T18:33:17.335+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-xacml-pdp | acks = -1 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | batch.size = 16384 policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | buffer.memory = 33554432 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = producer-1 policy-xacml-pdp | compression.gzip.level = -1 policy-xacml-pdp | compression.lz4.level = 9 policy-xacml-pdp | compression.type = none policy-xacml-pdp | compression.zstd.level = 3 policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | delivery.timeout.ms = 120000 policy-xacml-pdp | enable.idempotence = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-xacml-pdp | linger.ms = 0 policy-xacml-pdp | max.block.ms = 60000 policy-xacml-pdp | max.in.flight.requests.per.connection = 5 policy-xacml-pdp | max.request.size = 1048576 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metadata.max.idle.ms = 300000 policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partitioner.adaptive.partitioning.enable = true policy-xacml-pdp | partitioner.availability.timeout.ms = 0 policy-xacml-pdp | partitioner.class = null policy-xacml-pdp | partitioner.ignore.keys = false policy-xacml-pdp | receive.buffer.bytes = 32768 
policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retries = 2147483647 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | transaction.timeout.ms = 60000 policy-xacml-pdp | transactional.id = null policy-xacml-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-xacml-pdp | policy-xacml-pdp | [2025-06-16T18:33:17.335+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp 
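The two dumps above are standard kafka-clients ConsumerConfig/ProducerConfig output: a classic-protocol consumer in group 183ef33a-1420-47be-a802-23c79d9c9b0a reading policy-pdp-pap from kafka:9092 with String deserializers, and an idempotent producer (acks = -1, retries = 2147483647) with String serializers. A minimal sketch reproducing the non-default settings (the PdpTopicClients class is illustrative; servers, group id and topic are taken from the log):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpTopicClients {
        public static void main(String[] args) {
            Properties c = new Properties();
            c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            c.put(ConsumerConfig.GROUP_ID_CONFIG, "183ef33a-1420-47be-a802-23c79d9c9b0a");
            c.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
                consumer.subscribe(List.of("policy-pdp-pap"));   // topic from the log
            }

            Properties p = new Properties();
            p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            p.put(ProducerConfig.ACKS_CONFIG, "all");            // logged as acks = -1
            p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                // send/poll would follow, as in the [OUT|KAFKA]/[IN|KAFKA] entries below
            }
        }
    }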
| [2025-06-16T18:33:17.359+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-xacml-pdp | [2025-06-16T18:33:17.389+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-16T18:33:17.389+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-16T18:33:17.389+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098797389 policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9411ee5f-cd70-4cb0-9055-3ef4a34488c1, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|ServiceManager|main] service manager starting Terminate PDP policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|ServiceManager|main] service manager starting Heartbeat Publisher policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|ServiceManager|main] service manager starting REST Server policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|ServiceManager|main] service manager starting policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-xacml-pdp | [2025-06-16T18:33:17.411+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: registering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007fc2942adb70@357358c2 policy-xacml-pdp | [2025-06-16T18:33:17.412+00:00|INFO|SingleThreadedBusTopicSource|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=2]]]]: register: start not attempted policy-xacml-pdp | [2025-06-16T18:33:17.414+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers: policy-xacml-pdp | [] policy-xacml-pdp | [2025-06-16T18:33:17.416+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"d31f15e3-8200-426a-9c05-c67231bf3e73","timestampMs":1750098797398,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4"} policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|JettyServletServer|main] JettyJerseyServer 
[JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING policy-xacml-pdp | [2025-06-16T18:33:17.420+00:00|INFO|ServiceManager|main] service manager started policy-xacml-pdp | [2025-06-16T18:33:17.421+00:00|INFO|ServiceManager|main] service manager started policy-xacml-pdp | [2025-06-16T18:33:17.421+00:00|INFO|Main|main] Started policy-xacml-pdp service successfully. policy-xacml-pdp | [2025-06-16T18:33:17.425+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-xacml-pdp | [2025-06-16T18:33:17.773+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Cluster ID: DURHhdNSQwy0Fksygi2p2A policy-xacml-pdp | [2025-06-16T18:33:17.774+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: DURHhdNSQwy0Fksygi2p2A policy-xacml-pdp | 
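The JettyJerseyServer entries above describe the REST endpoint: a Jetty 12 server bound to 0.0.0.0:6969 hosting a Jersey servlet at /* and a Prometheus metrics servlet at /metrics. A minimal sketch of just the metrics part (the RestServerSketch class is illustrative; it assumes the jetty-ee10-servlet and prometheus-metrics-exporter-servlet-jakarta artifacts implied by the logged class names):

    import io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet;
    import org.eclipse.jetty.ee10.servlet.ServletContextHandler;
    import org.eclipse.jetty.ee10.servlet.ServletHolder;
    import org.eclipse.jetty.server.Server;

    public class RestServerSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server(6969);        // port from RestServerParameters above
            ServletContextHandler ctx = new ServletContextHandler();
            ctx.setContextPath("/");
            ctx.addServlet(new ServletHolder(new PrometheusMetricsServlet()), "/metrics");
            server.setHandler(ctx);
            server.start();                          // metrics served at :6969/metrics
            server.join();
        }
    }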
[2025-06-16T18:33:17.774+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-xacml-pdp | [2025-06-16T18:33:17.775+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-xacml-pdp | [2025-06-16T18:33:17.781+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] (Re-)joining group policy-xacml-pdp | [2025-06-16T18:33:17.798+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Request joining group due to: need to re-join with the given member-id: consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf policy-xacml-pdp | [2025-06-16T18:33:17.799+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] (Re-)joining group policy-xacml-pdp | [2025-06-16T18:33:18.030+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-xacml-pdp | [2025-06-16T18:33:18.031+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-xacml-pdp | [2025-06-16T18:33:20.803+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf', protocol='range'} policy-xacml-pdp | [2025-06-16T18:33:20.813+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Finished assignment for group at generation 1: {consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf=Assignment(partitions=[policy-pdp-pap-0])} policy-xacml-pdp | [2025-06-16T18:33:20.822+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf', protocol='range'} policy-xacml-pdp | [2025-06-16T18:33:20.822+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-xacml-pdp | [2025-06-16T18:33:20.824+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Adding newly assigned partitions: policy-pdp-pap-0 policy-xacml-pdp | [2025-06-16T18:33:20.831+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] 
Found no committed offset for partition policy-pdp-pap-0 policy-xacml-pdp | [2025-06-16T18:33:20.839+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-xacml-pdp | [2025-06-16T18:33:21.884+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"d31f15e3-8200-426a-9c05-c67231bf3e73","timestampMs":1750098797398,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4"} policy-xacml-pdp | [2025-06-16T18:33:21.949+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"d31f15e3-8200-426a-9c05-c67231bf3e73","timestampMs":1750098797398,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4"} policy-xacml-pdp | [2025-06-16T18:33:21.951+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK policy-xacml-pdp | [2025-06-16T18:33:21.951+00:00|INFO|BidirectionalTopicClient|KAFKA-source-policy-pdp-pap] topic policy-pdp-pap is ready; found matching message PdpTopicCheck(super=PdpMessage(messageName=PDP_TOPIC_CHECK, requestId=d31f15e3-8200-426a-9c05-c67231bf3e73, timestampMs=1750098797398, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=null, pdpSubgroup=null)) policy-xacml-pdp | [2025-06-16T18:33:21.957+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=1, locked=false, #topicListeners=2]]]]: unregistering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007fc2942adb70@357358c2 policy-xacml-pdp | [2025-06-16T18:33:21.960+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=421a3372-5f8e-464d-b798-a50b4b48cf6c, timestampMs=1750098801958, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=null), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-16T18:33:21.970+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"421a3372-5f8e-464d-b798-a50b4b48cf6c","timestampMs":1750098801958,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup"} policy-xacml-pdp | [2025-06-16T18:33:22.010+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"421a3372-5f8e-464d-b798-a50b4b48cf6c","timestampMs":1750098801958,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup"} policy-xacml-pdp | [2025-06-16T18:33:22.010+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-16T18:33:22.659+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"dbb93529-7620-483d-89b0-797ac3cb8b31","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:22.670+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=dbb93529-7620-483d-89b0-797ac3cb8b31, timestampMs=1750098802598, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-df98c171-81af-48a2-b20e-6b7c42a0d39b, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.Naming, typeVersion=1.0.0, properties={policy-instance-name=ONAP_NF_NAMING_TIMESTAMP, naming-models=[{naming-type=VNF, naming-recipe=AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP, name-operation=to_lower_case(), naming-properties=[{property-name=AIC_CLOUD_REGION}, {property-name=CONSTANT, property-value=onap-nf}, {property-name=TIMESTAMP}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VNFC, naming-recipe=VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-name=SEQUENCE, 
increment-sequence={max=zzz, scope=ENTIRETY, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}, {property-name=NFC_NAMING_CODE}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VF-MODULE, naming-recipe=VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-value=-, property-name=DELIMITER}, {property-name=VF_MODULE_LABEL}, {property-name=VF_MODULE_TYPE}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=PRECEEDING, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}]}]}))], policiesToBeUndeployed=[]) policy-xacml-pdp | [2025-06-16T18:33:22.678+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP type: onap.policies.Naming weight: null policy: policy-xacml-pdp | {"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}} policy-xacml-pdp | [2025-06-16T18:33:22.736+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is policy-xacml-pdp |
[XACML policy XML elided: the XML markup was stripped during log capture, leaving only bare text fragments. The surviving content shows PolicyId SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy type onap.policies.Naming, version 1.0.0, a rule described as "Default is to PERMIT if the policy matches.", and the ToscaPolicy JSON above repeated inside the policy body.]
[2025-06-16T18:33:22.742+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/naming/xacml.properties policy-xacml-pdp | [2025-06-16T18:33:22.749+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy-version=1.0.0} into application naming policy-xacml-pdp | [2025-06-16T18:33:22.750+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp |
{"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"dbb93529-7620-483d-89b0-797ac3cb8b31","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"90b3f482-cbc9-4416-b421-d6129b5f10b4","timestampMs":1750098802750,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:22.756+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=504884f8-f384-4692-b040-357f65737559, timestampMs=1750098802756, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-16T18:33:22.761+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"504884f8-f384-4692-b040-357f65737559","timestampMs":1750098802756,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:22.761+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"dbb93529-7620-483d-89b0-797ac3cb8b31","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"90b3f482-cbc9-4416-b421-d6129b5f10b4","timestampMs":1750098802750,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:22.762+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-16T18:33:22.772+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"504884f8-f384-4692-b040-357f65737559","timestampMs":1750098802756,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:22.773+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-16T18:33:22.798+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"100c0bdc-0836-4c51-8f89-991d9512ea35","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:22.799+00:00|INFO|XacmlPdpStateChangeListener|KAFKA-source-policy-pdp-pap] PDP State Change message has been received from the PAP - PdpStateChange(super=PdpMessage(messageName=PDP_STATE_CHANGE, requestId=100c0bdc-0836-4c51-8f89-991d9512ea35, timestampMs=1750098802598, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, 
pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-df98c171-81af-48a2-b20e-6b7c42a0d39b, state=ACTIVE) policy-xacml-pdp | [2025-06-16T18:33:22.800+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] set state of org.onap.policy.pdpx.main.XacmlState@76fe1a06 to ACTIVE policy-xacml-pdp | [2025-06-16T18:33:22.800+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] State change: ACTIVE - Starting rest controller policy-xacml-pdp | [2025-06-16T18:33:22.800+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"100c0bdc-0836-4c51-8f89-991d9512ea35","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"39a1e321-3725-4f00-b036-713652cd70c3","timestampMs":1750098802800,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:22.810+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"100c0bdc-0836-4c51-8f89-991d9512ea35","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"39a1e321-3725-4f00-b036-713652cd70c3","timestampMs":1750098802800,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:22.811+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-16T18:33:23.387+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"10aa937b-f7d1-4c76-92ce-87031228576d","timestampMs":1750098803112,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:23.388+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=10aa937b-f7d1-4c76-92ce-87031228576d, timestampMs=1750098803112, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-df98c171-81af-48a2-b20e-6b7c42a0d39b, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[], policiesToBeUndeployed=[]) policy-xacml-pdp | [2025-06-16T18:33:23.388+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"10aa937b-f7d1-4c76-92ce-87031228576d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"05bc3ec8-2c2e-4f60-9242-cc6c3fc1f912","timestampMs":1750098803388,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:23.397+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"10aa937b-f7d1-4c76-92ce-87031228576d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"05bc3ec8-2c2e-4f60-9242-cc6c3fc1f912","timestampMs":1750098803388,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:33:23.398+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-16T18:33:35.668+00:00|INFO|RequestLog|qtp2014233765-33] 172.17.0.2 - policyadmin [16/Jun/2025:18:33:35 +0000] "GET /metrics HTTP/1.1" 200 2135 "" "Prometheus/3.4.1" policy-xacml-pdp | [2025-06-16T18:33:43.269+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.1 - - [16/Jun/2025:18:33:43 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0" policy-xacml-pdp | [2025-06-16T18:34:31.737+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:31 +0000] "GET /policy/pdpx/v1/healthcheck?null HTTP/1.1" 200 110 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-16T18:34:31.752+00:00|INFO|RequestLog|qtp2014233765-28] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:31 +0000] "GET /metrics?null HTTP/1.1" 200 2057 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-16T18:34:33.220+00:00|INFO|GuardTranslator|qtp2014233765-28] Converting Request DecisionRequest(onapName=Guard, onapComponent=Guard-component, onapInstance=Guard-component-instance, requestId=unique-request-guard-1, context=null, action=guard, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={guard={actor=APPC, operation=ModifyConfig, target=f17face5-69cb-4c88-9e0b-7426db7edddd, requestId=c7c6a4aa-bb61-4a15-b831-ba1472dd4a65, clname=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}}) policy-xacml-pdp | [2025-06-16T18:34:33.238+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-dateTime policy-xacml-pdp | [2025-06-16T18:34:33.238+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-date policy-xacml-pdp | [2025-06-16T18:34:33.238+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-time policy-xacml-pdp | [2025-06-16T18:34:33.238+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:timezone policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:vf-count policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-name policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-id policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-type policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.nf-naming-code 
policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:vserver.vserver-id policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:cloud-region.cloud-region-id policy-xacml-pdp | [2025-06-16T18:34:33.243+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Constructed using properties {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-16T18:34:33.243+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-16T18:34:33.243+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Combining root policies with urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides policy-xacml-pdp | [2025-06-16T18:34:33.249+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Root Policies: 1 policy-xacml-pdp | [2025-06-16T18:34:33.249+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Referenced Policies: 0 policy-xacml-pdp | [2025-06-16T18:34:33.250+00:00|INFO|StdPolicyFinder|qtp2014233765-28] Updating policy map with policy 3bd63012-99d0-49f6-b77a-63bc4920dbc6 version 1.0 policy-xacml-pdp | [2025-06-16T18:34:33.252+00:00|INFO|StdOnapPip|qtp2014233765-28] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, 
xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-16T18:34:33.330+00:00|INFO|LogHelper|qtp2014233765-28] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] policy-xacml-pdp | [2025-06-16T18:34:33.364+00:00|INFO|Version|qtp2014233765-28] HHH000412: Hibernate ORM core version 6.6.16.Final policy-xacml-pdp | [2025-06-16T18:34:33.386+00:00|INFO|RegionFactoryInitiator|qtp2014233765-28] HHH000026: Second-level cache disabled policy-xacml-pdp | [2025-06-16T18:34:33.514+00:00|WARN|pooling|qtp2014233765-28] HHH10001002: Using built-in connection pool (not intended for production use) policy-xacml-pdp | [2025-06-16T18:34:33.723+00:00|INFO|pooling|qtp2014233765-28] HHH10001005: Database info: policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] policy-xacml-pdp | Database driver: org.postgresql.Driver policy-xacml-pdp | Database version: 16.4 policy-xacml-pdp | Autocommit mode: false policy-xacml-pdp | Isolation level: undefined/unknown policy-xacml-pdp | Minimum pool size: 1 policy-xacml-pdp | Maximum pool size: 20 policy-xacml-pdp | [2025-06-16T18:34:34.591+00:00|INFO|JtaPlatformInitiator|qtp2014233765-28] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-xacml-pdp | [2025-06-16T18:34:34.625+00:00|INFO|StdOnapPip|qtp2014233765-28] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, 
xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-16T18:34:34.629+00:00|INFO|LogHelper|qtp2014233765-28] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] policy-xacml-pdp | [2025-06-16T18:34:34.631+00:00|INFO|RegionFactoryInitiator|qtp2014233765-28] HHH000026: Second-level cache disabled policy-xacml-pdp | [2025-06-16T18:34:34.648+00:00|WARN|pooling|qtp2014233765-28] HHH10001002: Using built-in connection pool (not intended for production use) policy-xacml-pdp | [2025-06-16T18:34:34.663+00:00|INFO|pooling|qtp2014233765-28] HHH10001005: Database info: policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] policy-xacml-pdp | Database driver: org.postgresql.Driver policy-xacml-pdp | Database version: 16.4 policy-xacml-pdp | Autocommit mode: false policy-xacml-pdp | Isolation level: undefined/unknown policy-xacml-pdp | Minimum pool size: 1 policy-xacml-pdp | Maximum pool size: 20 policy-xacml-pdp | [2025-06-16T18:34:34.692+00:00|INFO|JtaPlatformInitiator|qtp2014233765-28] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-xacml-pdp | [2025-06-16T18:34:34.695+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-28] Elapsed Time: 1456ms policy-xacml-pdp | [2025-06-16T18:34:34.696+00:00|INFO|GuardTranslator|qtp2014233765-28] Converting Response 
{results=[{decision=NotApplicable,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component-instance}],includeInResults=true}{attributeId=urn:org:onap:guard:request:request-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=unique-request-guard-1}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:guard:clname:clname-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}],includeInResults=true}{attributeId=urn:org:onap:guard:actor:actor-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=APPC}],includeInResults=true}{attributeId=urn:org:onap:guard:operation:operation-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ModifyConfig}],includeInResults=true}{attributeId=urn:org:onap:guard:target:target-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=f17face5-69cb-4c88-9e0b-7426db7edddd}],includeInResults=true}]}]}]} policy-xacml-pdp | [2025-06-16T18:34:34.699+00:00|INFO|RequestLog|qtp2014233765-28] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:33 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 19 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-16T18:34:35.283+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"678eb842-8de7-4880-84c1-f110a1ff3c27","timestampMs":1750098875216,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:34:35.283+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=678eb842-8de7-4880-84c1-f110a1ff3c27, timestampMs=1750098875216, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-df98c171-81af-48a2-b20e-6b7c42a0d39b, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.monitoring.tcagen2, typeVersion=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}})), ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.optimization.resource.AffinityPolicy, typeVersion=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}))], policiesToBeUndeployed=[]) policy-xacml-pdp | 
[2025-06-16T18:34:35.284+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: onap.restart.tca type: onap.policies.monitoring.tcagen2 weight: null policy: policy-xacml-pdp | {"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}} policy-xacml-pdp | [2025-06-16T18:34:35.319+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is policy-xacml-pdp | [XACML policy XML was printed here, but the markup was stripped during log capture, leaving only its text nodes. Recoverable content: policy id onap.restart.tca, policy type onap.policies.monitoring.tcagen2, version 1.0.0, rule description "Default is to PERMIT if the policy matches.", and the ToscaPolicy JSON above embedded as an obligation attribute value.]
policy-xacml-pdp | [2025-06-16T18:34:35.319+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-16T18:34:35.320+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} into application monitoring policy-xacml-pdp | [2025-06-16T18:34:35.320+00:00|INFO|OptimizationPdpApplication|KAFKA-source-policy-pdp-pap] optimization can support onap.policies.optimization.resource.AffinityPolicy 1.0.0 policy-xacml-pdp | [2025-06-16T18:34:35.321+00:00|ERROR|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] PolicyType not found in data area yet /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml policy-xacml-pdp | java.nio.file.NoSuchFileException: /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92) policy-xacml-pdp | at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) policy-xacml-pdp | at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218) policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:380) policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:432) policy-xacml-pdp | at java.base/java.nio.file.Files.readAllBytes(Files.java:3288) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.loadPolicyType(StdMatchableTranslator.java:515) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.findPolicyType(StdMatchableTranslator.java:480) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.convertPolicy(StdMatchableTranslator.java:241) policy-xacml-pdp | at org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplicationTranslator.convertPolicy(OptimizationPdpApplicationTranslator.java:72) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider.loadPolicy(StdXacmlApplicationServiceProvider.java:127) policy-xacml-pdp | at org.onap.policy.pdpx.main.rest.XacmlPdpApplicationManager.loadDeployedPolicy(XacmlPdpApplicationManager.java:199) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.XacmlPdpUpdatePublisher.handlePdpUpdate(XacmlPdpUpdatePublisher.java:91) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:72) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:36) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.ScoListener.onTopicEvent(ScoListener.java:75) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher.onTopicEvent(MessageTypeDispatcher.java:97) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.JsonListener.onTopicEvent(JsonListener.java:61) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.TopicBase.broadcast(TopicBase.java:170) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.fetchAllMessages(SingleThreadedBusTopicSource.java:252) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.run(SingleThreadedBusTopicSource.java:235) policy-xacml-pdp | at java.base/java.lang.Thread.run(Thread.java:840) policy-xacml-pdp | [2025-06-16T18:34:35.349+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls policy-xacml-pdp | [2025-06-16T18:34:35.352+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls policy-xacml-pdp | [2025-06-16T18:34:35.576+00:00|INFO|RequestLog|qtp2014233765-32] 172.17.0.2 - policyadmin [16/Jun/2025:18:34:35 +0000] "GET /metrics HTTP/1.1" 200 2179 "" "Prometheus/3.4.1" policy-xacml-pdp | [2025-06-16T18:34:35.860+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] Successfully pulled onap.policies.optimization.resource.AffinityPolicy 1.0.0 policy-xacml-pdp | [2025-06-16T18:34:35.889+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.resource.AffinityPolicy:1.0.0 policy-xacml-pdp | 
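[editor's note] The ERROR and stack trace above are benign: per the trace, StdMatchableTranslator first looks for the policy-type YAML in its local data area, and the NoSuchFileException is simply the cache miss that triggers the "Successfully pulled onap.policies.optimization.resource.AffinityPolicy 1.0.0" fetch that follows. The sketch below illustrates that try-local-then-fetch-and-cache pattern in plain JDK file APIs; PolicyTypeCache, loadOrFetch, and the supplier stub are hypothetical names standing in for the actual ONAP code and Policy API call. Requires Java 11+ for Files.readString/writeString.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.function.Supplier;

public class PolicyTypeCache {
    static String loadOrFetch(Path dataArea, String type, String version,
            Supplier<String> fetchFromApi) throws IOException {
        Path yaml = dataArea.resolve(type + "-" + version + ".yaml");
        try {
            return Files.readString(yaml);    // cache hit: already in the data area
        } catch (NoSuchFileException e) {
            String body = fetchFromApi.get(); // cache miss: pull it from the API
            Files.createDirectories(dataArea);
            Files.writeString(yaml, body);    // store it for the next deployment
            return body;
        }
    }

    public static void main(String[] args) throws IOException {
        // The supplier here is a stub; the real code would call the Policy API.
        String yaml = loadOrFetch(Path.of("/tmp/pdpx-demo"),
                "onap.policies.optimization.resource.AffinityPolicy", "1.0.0",
                () -> "tosca_definitions_version: tosca_simple_yaml_1_1_0\n");
        System.out.println(yaml);
    }
}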
[2025-06-16T18:34:35.890+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Retrieving datatype policy.data.affinityProperties_properties policy-xacml-pdp | [2025-06-16T18:34:35.890+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.Resource:1.0.0 policy-xacml-pdp | [2025-06-16T18:34:35.890+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.Optimization:1.0.0 policy-xacml-pdp | [2025-06-16T18:34:35.890+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Found root - done scanning policy-xacml-pdp | [2025-06-16T18:34:35.891+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: OSDF_CASABLANCA.Affinity_Default type: onap.policies.optimization.resource.AffinityPolicy weight: 0 policy: policy-xacml-pdp | {"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}} policy-xacml-pdp | [2025-06-16T18:34:35.907+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] [generated matchable XACML policy XML was printed here, but the markup was stripped during log capture, leaving only its text nodes. Recoverable content: rule description "Default is to PERMIT if the policy matches.", condition annotations "IF exists and is equal", "Does the policy-type attribute exist?", "Get the size of policy-type attributes" and "Is this policy-type in the list?", target policy-type onap.policies.optimization.resource.AffinityPolicy, policy id OSDF_CASABLANCA.Affinity_Default, and the ToscaPolicy JSON above embedded as an obligation attribute value.] policy-xacml-pdp | [2025-06-16T18:34:35.923+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is policy-xacml-pdp | [the same generated XACML policy XML repeated, likewise reduced to its text nodes; content identical to the dump above.]
policy-xacml-pdp | [2025-06-16T18:34:35.923+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/optimization/xacml.properties policy-xacml-pdp | [2025-06-16T18:34:35.924+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0} into application optimization policy-xacml-pdp | [2025-06-16T18:34:35.924+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"678eb842-8de7-4880-84c1-f110a1ff3c27","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e85dcd01-b32e-47b7-bd0b-30c0aea4d73f","timestampMs":1750098875924,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:34:35.938+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"678eb842-8de7-4880-84c1-f110a1ff3c27","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e85dcd01-b32e-47b7-bd0b-30c0aea4d73f","timestampMs":1750098875924,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:34:35.939+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-16T18:34:59.462+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) policy-xacml-pdp | [2025-06-16T18:34:59.464+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:policy-type policy-xacml-pdp | [2025-06-16T18:34:59.465+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-16T18:34:59.465+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-16T18:34:59.465+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-16T18:34:59.466+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Loading policy file /opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml policy-xacml-pdp | [2025-06-16T18:34:59.483+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Root Policies: 1 policy-xacml-pdp | [2025-06-16T18:34:59.483+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Referenced Policies: 0 policy-xacml-pdp | [2025-06-16T18:34:59.483+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating policy map with policy f6cfd002-116f-46f6-a44b-6b5fe64bb918 version 1.0 policy-xacml-pdp | [2025-06-16T18:34:59.483+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating policy map with policy onap.restart.tca version 1.0.0 policy-xacml-pdp | 
[2025-06-16T18:34:59.501+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-30] Elapsed Time: 37ms policy-xacml-pdp | [2025-06-16T18:34:59.501+00:00|INFO|StdBaseTranslator|qtp2014233765-30] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=f6cfd002-116f-46f6-a44b-6b5fe64bb918,version=1.0}]}]} policy-xacml-pdp | [2025-06-16T18:34:59.501+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-16T18:34:59.501+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Advice found - not supported in 
this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-16T18:34:59.501+00:00|INFO|MonitoringPdpApplication|qtp2014233765-30] Abbreviating decision results DecisionResponse(status=null, message=null, advice=null, obligations=null, policies={onap.restart.tca={type=onap.policies.monitoring.tcagen2, type_version=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}}, name=onap.restart.tca, version=1.0.0, metadata={policy-id=onap.restart.tca, policy-version=1.0.0}}}, attributes=null) policy-xacml-pdp | [2025-06-16T18:34:59.504+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:59 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 146 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-16T18:34:59.521+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) policy-xacml-pdp | [2025-06-16T18:34:59.522+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:policy-type policy-xacml-pdp | [2025-06-16T18:34:59.523+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-30] Elapsed Time: 1ms policy-xacml-pdp | [2025-06-16T18:34:59.523+00:00|INFO|StdBaseTranslator|qtp2014233765-30] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=f6cfd002-116f-46f6-a44b-6b5fe64bb918,version=1.0}]}]} policy-xacml-pdp | [2025-06-16T18:34:59.523+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-16T18:34:59.524+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-16T18:34:59.524+00:00|INFO|MonitoringPdpApplication|qtp2014233765-30] Unsupported query param 
for Monitoring application: {null=[]} policy-xacml-pdp | [2025-06-16T18:34:59.527+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:59 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1055 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-16T18:34:59.542+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-33] Converting Request DecisionRequest(onapName=SDNC, onapComponent=SDNC-component, onapInstance=SDNC-component-instance, requestId=unique-request-sdnc-1, context=null, action=naming, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={nfRole=[], naming-type=[], property-name=[], policy-type=[onap.policies.Naming]}) policy-xacml-pdp | [2025-06-16T18:34:59.543+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:resource:resource-id policy-xacml-pdp | [2025-06-16T18:34:59.543+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-16T18:34:59.543+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-16T18:34:59.543+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-16T18:34:59.544+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Loading policy file /opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml policy-xacml-pdp | [2025-06-16T18:34:59.550+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Root Policies: 1 policy-xacml-pdp | [2025-06-16T18:34:59.550+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Referenced Policies: 0 policy-xacml-pdp | [2025-06-16T18:34:59.550+00:00|INFO|StdPolicyFinder|qtp2014233765-33] Updating policy map with policy d657ae60-0cd3-416f-b2f6-ba07ac03ceaf version 1.0 policy-xacml-pdp | [2025-06-16T18:34:59.550+00:00|INFO|StdPolicyFinder|qtp2014233765-33] Updating policy map with policy SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP version 1.0.0 policy-xacml-pdp | [2025-06-16T18:34:59.552+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-33] Elapsed Time: 9ms policy-xacml-pdp | [2025-06-16T18:34:59.552+00:00|INFO|StdBaseTranslator|qtp2014233765-33] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component-instance}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:policy-type,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}],includeInResults=true}]}],policyIdentifiers=[{id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP,version=1.0.0}],policySetIdentifiers=[{id=d657ae60-0cd3-416f-b2f6-ba07ac03ceaf,versi
on=1.0}]}]} policy-xacml-pdp | [2025-06-16T18:34:59.552+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-33] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-16T18:34:59.552+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-33] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-16T18:34:59.556+00:00|INFO|RequestLog|qtp2014233765-33] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:59 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1598 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-16T18:34:59.567+00:00|INFO|StdMatchableTranslator|qtp2014233765-31] Converting Request DecisionRequest(onapName=OOF, onapComponent=OOF-component, onapInstance=OOF-component-instance, requestId=null, context={subscriberName=[]}, action=optimize, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={scope=[], services=[], resources=[], geography=[]}) policy-xacml-pdp | [2025-06-16T18:34:59.569+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-16T18:34:59.569+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-16T18:34:59.569+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-16T18:34:59.570+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Loading policy file /opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml policy-xacml-pdp | [2025-06-16T18:34:59.576+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Root Policies: 1 policy-xacml-pdp | [2025-06-16T18:34:59.576+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Referenced Policies: 0 policy-xacml-pdp | [2025-06-16T18:34:59.576+00:00|INFO|StdPolicyFinder|qtp2014233765-31] Updating policy map with policy 0d949b50-cf16-40f6-9c19-026e2fd2de1a version 1.0 policy-xacml-pdp | [2025-06-16T18:34:59.576+00:00|INFO|StdPolicyFinder|qtp2014233765-31] Updating policy map with policy OSDF_CASABLANCA.Affinity_Default version 1.0.0 policy-xacml-pdp | [2025-06-16T18:34:59.578+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-31] Elapsed Time: 9ms policy-xacml-pdp | [2025-06-16T18:34:59.578+00:00|INFO|StdBaseTranslator|qtp2014233765-31] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OSDF_CASABLANCA.Affinity_Default}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:weight,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#integer,value=0}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.optimization.resource.AffinityPolicy}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component-instance}],includeInResults=true}]}],policyIdentifiers=[{id=OSDF_CASABLANCA.Affinity_Default,version=1.0.0}],policySetIdentifiers=[{id=0d949b50-cf16-40f6-9c19-026e2fd2de1a,version=1.0}]}]} policy-xacml-pdp | [2025-06-16T18:34:59.578+00:00|INFO|StdMatchableTranslator|qtp2014233765-31] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-16T18:34:59.578+00:00|INFO|StdMatchableTranslator|qtp2014233765-31] New entry onap.policies.optimization.resource.AffinityPolicy weight 0 policy-xacml-pdp | [2025-06-16T18:34:59.579+00:00|INFO|StdMatchableTranslator|qtp2014233765-31] Policy (OSDF_CASABLANCA.Affinity_Default,{type=onap.policies.optimization.resource.AffinityPolicy, type_version=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}, name=OSDF_CASABLANCA.Affinity_Default, version=1.0.0, metadata={policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0}}) policy-xacml-pdp | [2025-06-16T18:34:59.580+00:00|INFO|RequestLog|qtp2014233765-31] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:59 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 467 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-16T18:34:59.968+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"56415037-05c3-4c38-b9fb-020356e71e7c","timestampMs":1750098899940,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:34:59.968+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=56415037-05c3-4c38-b9fb-020356e71e7c, timestampMs=1750098899940, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-df98c171-81af-48a2-b20e-6b7c42a0d39b, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[], policiesToBeUndeployed=[onap.restart.tca 1.0.0]) policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-16T18:34:59.970+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Unloaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} from application monitoring policy-xacml-pdp | 
[2025-06-16T18:34:59.971+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"56415037-05c3-4c38-b9fb-020356e71e7c","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"3270cf9c-3884-4825-aa2b-8edb8611600f","timestampMs":1750098899970,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:34:59.976+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"56415037-05c3-4c38-b9fb-020356e71e7c","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"3270cf9c-3884-4825-aa2b-8edb8611600f","timestampMs":1750098899970,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:34:59.976+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-16T18:35:22.765+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=846e2fcb-c890-4d0f-a2c8-5f3e4f1941ca, timestampMs=1750098922765, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=ACTIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0, OSDF_CASABLANCA.Affinity_Default 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-16T18:35:22.765+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"846e2fcb-c890-4d0f-a2c8-5f3e4f1941ca","timestampMs":1750098922765,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:35:22.775+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"846e2fcb-c890-4d0f-a2c8-5f3e4f1941ca","timestampMs":1750098922765,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-16T18:35:22.776+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-16T18:35:35.580+00:00|INFO|RequestLog|qtp2014233765-32] 172.17.0.2 - policyadmin [16/Jun/2025:18:35:35 +0000] "GET /metrics HTTP/1.1" 200 2223 "" "Prometheus/3.4.1" postgres | The files belonging to this database system will be owned by user "postgres". postgres | This user must also own the server process. 
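(Editor's aside on the PDP_UPDATE/PDP_STATUS exchange above: the xacml-pdp acknowledges the undeploy by publishing a PDP_STATUS message whose "policies" list now holds only the two remaining policies. Below is a minimal Python sketch of parsing such a payload, using only field names visible in the logged JSON; the sample message and the helper itself are illustrative assumptions, not part of the build.)

import json

def summarize_pdp_status(raw: str) -> str:
    # Field names (pdpType, state, healthy, policies, pdpGroup, pdpSubgroup)
    # are taken from the PDP_STATUS JSON logged above; this helper is hypothetical.
    msg = json.loads(raw)
    deployed = ", ".join(f"{p['name']} {p['version']}" for p in msg.get("policies", []))
    return (f"{msg['name']} [{msg['pdpGroup']}/{msg['pdpSubgroup']}] "
            f"{msg['state']}/{msg['healthy']} policies: {deployed}")

sample = ('{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY",'
          '"policies":[{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],'
          '"messageName":"PDP_STATUS","name":"xacml-pdp-0",'
          '"pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}')
print(summarize_pdp_status(sample))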
postgres |
postgres | The database cluster will be initialized with locale "en_US.utf8".
postgres | The default database encoding has accordingly been set to "UTF8".
postgres | The default text search configuration will be set to "english".
postgres |
postgres | Data page checksums are disabled.
postgres |
postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres | creating subdirectories ... ok
postgres | selecting dynamic shared memory implementation ... posix
postgres | selecting default max_connections ... 100
postgres | selecting default shared_buffers ... 128MB
postgres | selecting default time zone ... Etc/UTC
postgres | creating configuration files ... ok
postgres | running bootstrap script ... ok
postgres | performing post-bootstrap initialization ... ok
postgres | syncing data to disk ... ok
postgres |
postgres |
postgres | Success. You can now start the database server using:
postgres |
postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres |
postgres | initdb: warning: enabling "trust" authentication for local connections
postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
postgres | waiting for server to start....2025-06-16 18:32:39.305 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres | 2025-06-16 18:32:39.307 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2025-06-16 18:32:39.311 UTC [52] LOG: database system was shut down at 2025-06-16 18:32:38 UTC
postgres | 2025-06-16 18:32:39.316 UTC [49] LOG: database system is ready to accept connections
postgres | done
postgres | server started
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh
postgres | #!/bin/bash -xv
postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved
postgres | #
postgres | # Licensed under the Apache License, Version 2.0 (the "License");
postgres | # you may not use this file except in compliance with the License.
postgres | # You may obtain a copy of the License at
postgres | #
postgres | # http://www.apache.org/licenses/LICENSE-2.0
postgres | #
postgres | # Unless required by applicable law or agreed to in writing, software
postgres | # distributed under the License is distributed on an "AS IS" BASIS,
postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
postgres | # See the License for the specific language governing permissions and
postgres | # limitations under the License.
postgres |
postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"
postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';'
postgres | CREATE ROLE
postgres |
postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | do
postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;"
postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;"
postgres | done
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;'
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;'
postgres | ALTER DATABASE
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;'
postgres | GRANT
postgres |
postgres | waiting for server to shut down...2025-06-16 18:32:40.835 UTC [49] LOG:
received fast shutdown request postgres | .2025-06-16 18:32:40.838 UTC [49] LOG: aborting any active transactions postgres | 2025-06-16 18:32:40.839 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1 postgres | 2025-06-16 18:32:40.840 UTC [50] LOG: shutting down postgres | 2025-06-16 18:32:40.842 UTC [50] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-16 18:32:41.294 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.356 s, sync=0.085 s, total=0.454 s; sync files=1788, longest=0.007 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-16 18:32:41.310 UTC [49] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-16 18:32:41.359 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-16 18:32:41.360 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-16 18:32:41.360 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-16 18:32:41.363 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-16 18:32:41.373 UTC [102] LOG: database system was shut down at 2025-06-16 18:32:41 UTC postgres | 2025-06-16 18:32:41.378 UTC [1] LOG: database system is ready to accept connections prometheus | time=2025-06-16T18:32:41.372Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-16T18:32:41.372Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-16T18:32:41.372Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-16T18:32:41.376Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-16T18:32:41.381Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-16T18:32:41.382Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-16T18:32:41.384Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-16T18:32:41.384Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-16T18:32:41.385Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-16T18:32:41.385Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.31µs prometheus | time=2025-06-16T18:32:41.385Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-16T18:32:41.386Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=173.151µs prometheus | time=2025-06-16T18:32:41.386Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=20.21µs wal_replay_duration=188.601µs wbl_replay_duration=170ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.31µs total_replay_duration=254.861µs prometheus | time=2025-06-16T18:32:41.392Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-16T18:32:41.392Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-16T18:32:41.392Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-16T18:32:41.393Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-16T18:32:41.393Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.47µs remote_storage=2.53µs web_handler=900ns query_engine=1.32µs scrape=194.483µs scrape_sd=146.041µs notify=190.991µs notify_sd=12.76µs rules=1.411µs tracing=19.03µs filename=/etc/prometheus/prometheus.yml totalDuration=1.22195ms prometheus | time=2025-06-16T18:32:41.393Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-16T18:32:41.394Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-16 18:32:39,410] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 18:32:39,412] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 18:32:39,412] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 18:32:39,412] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 18:32:39,412] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 18:32:39,413] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-16 18:32:39,413] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-16 18:32:39,413] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-16 18:32:39,413] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-16 18:32:39,414] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-16 18:32:39,415] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 18:32:39,415] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 18:32:39,415] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 18:32:39,415] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 18:32:39,415] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 18:32:39,415] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-16 18:32:39,428] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-16 18:32:39,430] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-16 18:32:39,430] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-16 18:32:39,432] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-16 18:32:39,444] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,444] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,445] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,445] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,445] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,445] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,445] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,445] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,445] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,445] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-16 18:32:39,446] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,448] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-16 18:32:39,449] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,449] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,452] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-16 18:32:39,452] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 18:32:39,455] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,455] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,456] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-16 18:32:39,456] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-16 18:32:39,456] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,477] INFO Logging initialized @385ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-16 18:32:39,533] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-16 18:32:39,533] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-16 18:32:39,547] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-16 18:32:39,576] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-16 18:32:39,576] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-16 18:32:39,577] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-16 18:32:39,580] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-16 18:32:39,588] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-16 18:32:39,597] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-16 18:32:39,597] INFO Started @509ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-16 18:32:39,597] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-16 18:32:39,600] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-16 18:32:39,601] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-16 18:32:39,602] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-16 18:32:39,602] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-16 18:32:39,615] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-16 18:32:39,615] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-16 18:32:39,615] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-16 18:32:39,616] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-16 18:32:39,621] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-16 18:32:39,621] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-16 18:32:39,625] INFO Snapshot loaded in 10 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-16 18:32:39,626] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-16 18:32:39,627] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 18:32:39,633] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-16 18:32:39,634] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-16 18:32:39,646] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-16 18:32:39,647] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-16 18:32:40,589] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
Container grafana Stopping
Container policy-xacml-pdp Stopping
Container policy-csit Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-xacml-pdp Stopped
Container policy-xacml-pdp Removing
Container policy-xacml-pdp Removed
Container policy-pap Stopping
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container policy-api Stopping
Container kafka Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container postgres Stopping
Container postgres Stopped
Container postgres Removing
Container postgres Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2073 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins633225534144676375.sh
---> sysstat.sh
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins3491583075420671688.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp ']'
+ mkdir -p /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/archives/
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins9649094732095286624.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ytGL from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ytGL/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins1853916147953942266.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/config14512127637261109072tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins3496005532926683194.sh
---> create-netrc.sh
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins17629765988766509521.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ytGL from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ytGL/bin to PATH
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins15043706426140578330.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins6598883033032786703.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ytGL from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-ytGL/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash -l /tmp/jenkins3295918368526128711.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ytGL from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ytGL/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-xacml-pdp/2012
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
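(Editor's aside: the "-p **/target/surefire-reports/*-output.txt" pattern in the logs-deploy step above selects the Robot Framework output files for upload. A small Python sketch of the same selection, using the workspace path printed in this job; the real step is implemented by lftools' logs-deploy.sh, not this code.)

from pathlib import Path

# Workspace root as printed in the job log above.
workspace = Path("/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp")

# Same glob as the -p pattern: any *-output.txt under any
# target/surefire-reports directory, at any depth.
for report in sorted(workspace.glob("**/target/surefire-reports/*-output.txt")):
    print(report.relative_to(workspace))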
INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-21665 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 15G 141G 10% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 890 24274 0 7002 30821 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:b3:5e:05 brd ff:ff:ff:ff:ff:ff inet 10.30.106.152/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 85970sec preferred_lft 85970sec inet6 fe80::f816:3eff:feb3:5e05/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:ad:d4:77:75 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:adff:fed4:7775/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21665) 06/16/25 _x86_64_ (8 CPU) 18:30:09 LINUX RESTART (8 CPU) 18:31:02 tps rtps wtps bread/s bwrtn/s 18:32:02 223.85 23.23 200.62 2343.48 54154.28 18:33:01 604.27 7.74 596.53 470.70 179278.63 18:34:01 148.76 0.12 148.64 13.46 41999.27 18:35:01 116.03 0.32 115.71 31.06 40710.55 18:36:01 22.56 0.00 22.56 0.00 23169.47 18:37:01 82.00 1.32 80.69 98.25 24964.91 Average: 198.41 5.42 192.99 490.00 60392.37 18:31:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 18:32:02 26015116 31556400 6924104 21.02 106764 5632724 2261728 6.65 1057272 5410688 3243580 18:33:01 24275944 30600632 8663276 26.30 158088 6272436 6842756 20.13 2217104 5822884 240 18:34:01 22851284 29737400 10087936 30.63 178536 6772156 8203264 24.14 3184952 6215072 20316 18:35:01 22571420 29528708 10367800 31.48 200216 6809328 8705820 25.61 3448284 6219148 532 
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21665)  06/16/25  _x86_64_  (8 CPU)

18:30:09     LINUX RESTART  (8 CPU)

18:31:02          tps      rtps      wtps   bread/s   bwrtn/s
18:32:02       223.85     23.23    200.62   2343.48  54154.28
18:33:01       604.27      7.74    596.53    470.70 179278.63
18:34:01       148.76      0.12    148.64     13.46  41999.27
18:35:01       116.03      0.32    115.71     31.06  40710.55
18:36:01        22.56      0.00     22.56      0.00  23169.47
18:37:01        82.00      1.32     80.69     98.25  24964.91
Average:       198.41      5.42    192.99    490.00  60392.37

18:31:02    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
18:32:02     26015116  31556400   6924104     21.02    106764   5632724   2261728      6.65   1057272   5410688   3243580
18:33:01     24275944  30600632   8663276     26.30    158088   6272436   6842756     20.13   2217104   5822884       240
18:34:01     22851284  29737400  10087936     30.63    178536   6772156   8203264     24.14   3184952   6215072     20316
18:35:01     22571420  29528708  10367800     31.48    200216   6809328   8705820     25.61   3448284   6219148       532
18:36:01     22620620  29577812  10318600     31.33    200380   6810096   8424480     24.79   3408464   6213188       124
18:37:01     24898460  31598344   8040760     24.41    202144   6545620   1575812      4.64   1448636   5970516     11912
Average:     23872141  30433216   9067079     27.53    174355   6473727   6002310     17.66   2460785   5975249    546117

18:31:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
18:32:02      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
18:32:02         ens3   1232.63    752.72  33236.76     63.26      0.00      0.00      0.00      0.00
18:32:02           lo     13.93     13.93      1.31      1.31      0.00      0.00      0.00      0.00
18:33:01  veth01f20ea      0.00      0.19      0.00      0.01      0.00      0.00      0.00      0.00
18:33:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
18:33:01  veth0cd73b5      0.37      0.54      0.03      0.03      0.00      0.00      0.00      0.00
18:33:01  vethf4c97aa      2.59      2.37      0.31      0.30      0.00      0.00      0.00      0.00
18:34:01  veth01f20ea      0.45      0.50      0.05      1.00      0.00      0.00      0.00      0.00
18:34:01      docker0    100.22    135.56      5.35   1053.19      0.00      0.00      0.00      0.00
18:34:01  veth0cd73b5      4.05      5.08      0.65      0.53      0.00      0.00      0.00      0.00
18:34:01  vethf4c97aa     89.32     89.34     15.73     18.33      0.00      0.00      0.00      0.00
18:35:01  veth01f20ea      0.50      0.62      0.05      1.26      0.00      0.00      0.00      0.00
18:35:01      docker0     42.81     62.71      3.69    296.13      0.00      0.00      0.00      0.00
18:35:01  vethe424d93      1.87      1.68      0.67      0.49      0.00      0.00      0.00      0.00
18:35:01  veth0cd73b5      4.67      6.15      0.96      0.72      0.00      0.00      0.00      0.00
18:36:01  veth01f20ea      0.80      0.93      0.09      1.32      0.00      0.00      0.00      0.00
18:36:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
18:36:01  veth0cd73b5      3.58      5.03      0.57      0.39      0.00      0.00      0.00      0.00
18:36:01  vethf4c97aa    222.33    221.78     31.57     46.48      0.00      0.00      0.00      0.00
18:37:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
18:37:01         ens3   1973.32   1267.64  36510.51    187.94      0.00      0.00      0.00      0.00
18:37:01           lo     26.70     26.70      2.40      2.40      0.00      0.00      0.00      0.00
Average:      docker0     23.94     33.19      1.51    225.86      0.00      0.00      0.00      0.00
Average:         ens3    268.13    171.50   5945.08     19.55      0.00      0.00      0.00      0.00
Average:           lo      3.78      3.78      0.34      0.34      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21665)  06/16/25  _x86_64_  (8 CPU)

18:30:09     LINUX RESTART  (8 CPU)

18:31:02     CPU     %user     %nice   %system   %iowait    %steal     %idle
18:32:02     all     15.13      0.00      3.96      3.43      0.05     77.43
18:32:02       0      8.48      0.00      4.03      0.95      0.05     86.50
18:32:02       1     38.81      0.00      4.65      6.38      0.08     50.08
18:32:02       2     11.53      0.00      3.91      4.91      0.05     79.60
18:32:02       3     15.24      0.00      3.94      0.80      0.07     79.95
18:32:02       4     11.09      0.00      3.39      8.13      0.05     77.35
18:32:02       5      8.02      0.00      3.77      2.55      0.03     85.62
18:32:02       6      8.39      0.00      3.25      0.61      0.03     87.72
18:32:02       7     19.42      0.00      4.80      3.14      0.07     72.57
18:33:01     all     17.42      0.00      4.92     11.30      0.06     66.30
18:33:01       0     22.02      0.00      5.20      3.15      0.07     69.56
18:33:01       1     17.20      0.00      4.90      4.24      0.05     73.61
18:33:01       2     17.78      0.00      4.61      3.28      0.05     74.28
18:33:01       3     16.88      0.00      4.58      6.08      0.05     72.41
18:33:01       4     14.02      0.00      3.88     15.12      0.05     66.94
18:33:01       5     17.88      0.00      5.32      7.86      0.07     68.87
18:33:01       6     15.91      0.00      6.14     42.29      0.07     35.58
18:33:01       7     17.66      0.00      4.66      8.56      0.05     69.07
18:34:01     all     19.21      0.00      2.24      1.88      0.07     76.60
18:34:01       0     14.58      0.00      2.08      2.48      0.05     80.82
18:34:01       1     15.48      0.00      2.35      0.18      0.07     81.92
18:34:01       2     22.55      0.00      2.37      0.22      0.07     74.80
18:34:01       3     27.56      0.00      2.28      2.77      0.08     67.31
18:34:01       4     17.03      0.00      1.73      7.37      0.08     73.78
18:34:01       5     19.19      0.00      2.17      0.03      0.05     78.55
18:34:01       6     15.80      0.00      1.82      0.94      0.07     81.37
18:34:01       7     21.51      0.00      3.11      1.05      0.08     74.23
18:35:01     all      9.68      0.00      1.84      2.34      0.06     86.09
18:35:01       0      8.57      0.00      1.34      0.32      0.05     89.72
18:35:01       1     11.55      0.00      2.12      0.12      0.07     86.14
18:35:01       2     14.59      0.00      2.09      1.04      0.07     82.21
18:35:01       3     10.03      0.00      1.97      5.08      0.05     82.87
18:35:01       4      7.75      0.00      2.07      3.24      0.07     86.88
18:35:01       5      8.47      0.00      2.28      0.10      0.05     89.11
18:35:01       6      8.30      0.00      1.41      7.69      0.07     82.53
18:35:01       7      8.17      0.00      1.49      1.14      0.05     89.16
18:36:01     all      0.93      0.00      0.23      0.94      0.04     97.86
18:36:01       0      0.75      0.00      0.15      0.02      0.03     99.05
18:36:01       1      0.73      0.00      0.37      0.02      0.03     98.85
18:36:01       2      0.82      0.00      0.17      0.02      0.05     98.95
18:36:01       3      1.27      0.00      0.18      0.02      0.05     98.48
18:36:01       4      1.12      0.00      0.35      0.13      0.02     98.37
18:36:01       5      0.98      0.00      0.15      0.02      0.03     98.81
18:36:01       6      1.07      0.00      0.30      7.28      0.05     91.30
18:36:01       7      0.67      0.00      0.17      0.00      0.03     99.13
18:37:01     all      5.86      0.00      0.81      1.21      0.03     92.09
18:37:01       0      1.18      0.00      0.95      0.08      0.03     97.75
18:37:01       1     13.99      0.00      0.82      0.17      0.03     85.00
18:37:01       2      1.99      0.00      0.55      0.99      0.02     96.46
18:37:01       3      6.89      0.00      0.99      0.08      0.05     91.99
18:37:01       4      1.58      0.00      0.58      0.60      0.02     97.22
18:37:01       5      3.86      0.00      0.87      0.18      0.03     95.06
18:37:01       6     14.83      0.00      1.13      0.43      0.05     83.55
18:37:01       7      2.64      0.00      0.56      7.10      0.03     89.66
Average:     all     11.34      0.00      2.32      3.49      0.05     82.81
Average:       0      9.22      0.00      2.28      1.16      0.05     87.30
Average:       1     16.24      0.00      2.52      1.83      0.06     79.35
Average:       2     11.52      0.00      2.27      1.73      0.05     84.43
Average:       3     12.95      0.00      2.31      2.46      0.06     82.21
Average:       4      8.73      0.00      1.99      5.72      0.05     83.52
Average:       5      9.70      0.00      2.41      1.76      0.04     86.08
Average:       6     10.70      0.00      2.32      9.75      0.06     77.17
Average:       7     11.62      0.00      2.45      3.49      0.05     82.39
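
[editor's note] When scanning a run like this, the per-CPU Average rows are the quickest signal: here CPU 6 averaged 9.75 %iowait, consistent with the 42.29 spike it shows at 18:33:01. A small awk one-liner can pull just those rows out of the raw text; field positions ($2 = CPU id, $6 = %iowait, last field = %idle) assume the exact "sar -P ALL" layout above, and sar-cpu.txt is a stand-in name for wherever that dump was saved.

# Print per-CPU average %iowait and %idle from a saved 'sar -P ALL' dump.
awk '/^Average:/ { printf "cpu %-3s iowait %5s%%  idle %6s%%\n", $2, $6, $NF }' sar-cpu.txt
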