18:30:47 Started by timer
18:30:47 Running as SYSTEM
18:30:47 [EnvInject] - Loading node environment variables.
18:30:47 Building remotely on prd-ubuntu1804-docker-8c-8g-21665 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp
18:30:47 [ssh-agent] Looking for ssh-agent implementation...
18:30:47 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
18:30:47 $ ssh-agent
18:30:47 SSH_AUTH_SOCK=/tmp/ssh-uTVy9EDQ0Dkn/agent.2071
18:30:47 SSH_AGENT_PID=2073
18:30:47 [ssh-agent] Started.
18:30:47 Running ssh-add (command line suppressed)
18:30:47 Identity added: /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/private_key_15688536647788327738.key (/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/private_key_15688536647788327738.key)
18:30:47 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
18:30:47 The recommended git tool is: NONE
18:30:49 using credential onap-jenkins-ssh
18:30:49 Wiping out workspace first.
18:30:49 Cloning the remote Git repository
18:30:49 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
18:30:49  > git init /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp # timeout=10
18:30:49 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
18:30:49  > git --version # timeout=10
18:30:49  > git --version # 'git version 2.17.1'
18:30:49 using GIT_SSH to set credentials Gerrit user
18:30:49 Verifying host key using manually-configured host key entries
18:30:49  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
18:30:49  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
18:30:49  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
18:30:50 Avoid second fetch
18:30:50  > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
18:30:50 Checking out Revision 473f78ecac5fb75e5968b31a5bab95eaba72c803 (refs/remotes/origin/master)
18:30:50  > git config core.sparsecheckout # timeout=10
18:30:50  > git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=30
18:30:50 Commit message: "Add Fix fail handling in ACM runtime in CSIT"
18:30:50  > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
18:30:53 provisioning config files...
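For reference, the checkout above can be reproduced outside Jenkins; this is a minimal sketch using the same commands the git plugin logged (mirror URL and revision are taken verbatim from the log, the local directory name is illustrative):

    # reproduce the logged clone/checkout locally (sketch)
    git init docker && cd docker
    git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git \
        '+refs/heads/*:refs/remotes/origin/*'
    git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git
    # detached HEAD at the exact revision this build used
    git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803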
18:30:53 copy managed file [npmrc] to file:/home/jenkins/.npmrc
18:30:53 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
18:30:53 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins6239765298903646518.sh
18:30:53 ---> python-tools-install.sh
18:30:53 Setup pyenv:
18:30:54 * system (set by /opt/pyenv/version)
18:30:54 * 3.8.13 (set by /opt/pyenv/version)
18:30:54 * 3.9.13 (set by /opt/pyenv/version)
18:30:54 * 3.10.6 (set by /opt/pyenv/version)
18:30:58 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-ytGL
18:30:58 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
18:31:02 lf-activate-venv(): INFO: Installing: lftools
18:31:26 lf-activate-venv(): INFO: Adding /tmp/venv-ytGL/bin to PATH
18:31:26 Generating Requirements File
18:31:45 Python 3.10.6
18:31:45 pip 25.1.1 from /tmp/venv-ytGL/lib/python3.10/site-packages/pip (python 3.10)
18:31:46 appdirs==1.4.4
18:31:46 argcomplete==3.6.2
18:31:46 aspy.yaml==1.3.0
18:31:46 attrs==25.3.0
18:31:46 autopage==0.5.2
18:31:46 beautifulsoup4==4.13.4
18:31:46 boto3==1.38.36
18:31:46 botocore==1.38.36
18:31:46 bs4==0.0.2
18:31:46 cachetools==5.5.2
18:31:46 certifi==2025.6.15
18:31:46 cffi==1.17.1
18:31:46 cfgv==3.4.0
18:31:46 chardet==5.2.0
18:31:46 charset-normalizer==3.4.2
18:31:46 click==8.2.1
18:31:46 cliff==4.10.0
18:31:46 cmd2==2.6.1
18:31:46 cryptography==3.3.2
18:31:46 debtcollector==3.0.0
18:31:46 decorator==5.2.1
18:31:46 defusedxml==0.7.1
18:31:46 Deprecated==1.2.18
18:31:46 distlib==0.3.9
18:31:46 dnspython==2.7.0
18:31:46 docker==7.1.0
18:31:46 dogpile.cache==1.4.0
18:31:46 durationpy==0.10
18:31:46 email_validator==2.2.0
18:31:46 filelock==3.18.0
18:31:46 future==1.0.0
18:31:46 gitdb==4.0.12
18:31:46 GitPython==3.1.44
18:31:46 google-auth==2.40.3
18:31:46 httplib2==0.22.0
18:31:46 identify==2.6.12
18:31:46 idna==3.10
18:31:46 importlib-resources==1.5.0
18:31:46 iso8601==2.1.0
18:31:46 Jinja2==3.1.6
18:31:46 jmespath==1.0.1
18:31:46 jsonpatch==1.33
18:31:46 jsonpointer==3.0.0
18:31:46 jsonschema==4.24.0
18:31:46 jsonschema-specifications==2025.4.1
18:31:46 keystoneauth1==5.11.1
18:31:46 kubernetes==33.1.0
18:31:46 lftools==0.37.13
18:31:46 lxml==5.4.0
18:31:46 MarkupSafe==3.0.2
18:31:46 msgpack==1.1.1
18:31:46 multi_key_dict==2.0.3
18:31:46 munch==4.0.0
18:31:46 netaddr==1.3.0
18:31:46 niet==1.4.2
18:31:46 nodeenv==1.9.1
18:31:46 oauth2client==4.1.3
18:31:46 oauthlib==3.2.2
18:31:46 openstacksdk==4.6.0
18:31:46 os-client-config==2.1.0
18:31:46 os-service-types==1.7.0
18:31:46 osc-lib==4.0.2
18:31:46 oslo.config==9.8.0
18:31:46 oslo.context==6.0.0
18:31:46 oslo.i18n==6.5.1
18:31:46 oslo.log==7.1.0
18:31:46 oslo.serialization==5.7.0
18:31:46 oslo.utils==9.0.0
18:31:46 packaging==25.0
18:31:46 pbr==6.1.1
18:31:46 platformdirs==4.3.8
18:31:46 prettytable==3.16.0
18:31:46 psutil==7.0.0
18:31:46 pyasn1==0.6.1
18:31:46 pyasn1_modules==0.4.2
18:31:46 pycparser==2.22
18:31:46 pygerrit2==2.0.15
18:31:46 PyGithub==2.6.1
18:31:46 PyJWT==2.10.1
18:31:46 PyNaCl==1.5.0
18:31:46 pyparsing==2.4.7
18:31:46 pyperclip==1.9.0
18:31:46 pyrsistent==0.20.0
18:31:46 python-cinderclient==9.7.0
18:31:46 python-dateutil==2.9.0.post0
18:31:46 python-heatclient==4.2.0
18:31:46 python-jenkins==1.8.2
18:31:46 python-keystoneclient==5.6.0
18:31:46 python-magnumclient==4.8.1
18:31:46 python-openstackclient==8.1.0
18:31:46 python-swiftclient==4.8.0
18:31:46 PyYAML==6.0.2
18:31:46 referencing==0.36.2
18:31:46 requests==2.32.4
18:31:46 requests-oauthlib==2.0.0
18:31:46 requestsexceptions==1.4.0
18:31:46 rfc3986==2.0.0
18:31:46 rpds-py==0.25.1
18:31:46 rsa==4.9.1
18:31:46 ruamel.yaml==0.18.14
18:31:46 ruamel.yaml.clib==0.2.12
18:31:46 s3transfer==0.13.0
18:31:46 simplejson==3.20.1
18:31:46 six==1.17.0
18:31:46 smmap==5.0.2
18:31:46 soupsieve==2.7
18:31:46 stevedore==5.4.1
18:31:46 tabulate==0.9.0
18:31:46 toml==0.10.2
18:31:46 tomlkit==0.13.3
18:31:46 tqdm==4.67.1
18:31:46 typing_extensions==4.14.0
18:31:46 tzdata==2025.2
18:31:46 urllib3==1.26.20
18:31:46 virtualenv==20.31.2
18:31:46 wcwidth==0.2.13
18:31:46 websocket-client==1.8.0
18:31:46 wrapt==1.17.2
18:31:46 xdg==6.0.0
18:31:46 xmltodict==0.14.2
18:31:46 yq==3.4.3
18:31:46 [EnvInject] - Injecting environment variables from a build step.
18:31:46 [EnvInject] - Injecting as environment variables the properties content
18:31:46 SET_JDK_VERSION=openjdk17
18:31:46 GIT_URL="git://cloud.onap.org/mirror"
18:31:46
18:31:46 [EnvInject] - Variables injected successfully.
18:31:46 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/sh /tmp/jenkins15138425634724370426.sh
18:31:46 ---> update-java-alternatives.sh
18:31:46 ---> Updating Java version
18:31:46 ---> Ubuntu/Debian system detected
18:31:46 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
18:31:46 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
18:31:46 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
18:31:46 openjdk version "17.0.4" 2022-07-19
18:31:46 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
18:31:46 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
18:31:46 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
18:31:46 [EnvInject] - Injecting environment variables from a build step.
18:31:46 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
18:31:46 [EnvInject] - Variables injected successfully.
18:31:46 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/sh -xe /tmp/jenkins9890651721019404255.sh
18:31:46 + /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/csit/run-project-csit.sh xacml-pdp
18:31:47 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
18:31:47 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
18:31:47 Configure a credential helper to remove this warning. See
18:31:47 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
18:31:47
18:31:47 Login Succeeded
18:31:47 docker: 'compose' is not a docker command.
18:31:47 See 'docker --help'
18:31:47 Docker Compose Plugin not installed. Installing now...
18:31:47   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
18:31:47                                  Dload  Upload   Total   Spent    Left  Speed
18:31:48 100 60.2M  100 60.2M    0     0  59.7M      0  0:00:01  0:00:01 --:--:-- 77.5M
18:31:48 Setting project configuration for: xacml-pdp
18:31:48 Configuring docker compose...
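The lf-activate-venv() lines and the requirements listing above amount to building a throwaway venv, installing lftools, and freezing the result. A minimal sketch under that assumption (the function internals are not shown in the log; the pip invocations here are illustrative, only the paths mirror the log):

    # illustrative equivalent of the logged lf-activate-venv() step (not the LF script itself)
    python3 -m venv /tmp/venv-ytGL
    . /tmp/venv-ytGL/bin/activate
    pip install --upgrade pip lftools        # "Installing: lftools"
    export PATH="/tmp/venv-ytGL/bin:$PATH"   # "Adding /tmp/venv-ytGL/bin to PATH"
    pip freeze                               # produces the requirements listing above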
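The "Installing now..." step above shows only a curl progress meter, not the install command itself. A plausible sketch of what such a helper does, assuming the standard CLI-plugin layout (the download URL, release, and plugin path are assumptions, not taken from the log; the log only shows a roughly 60 MB download):

    # hypothetical sketch: install the docker compose CLI plugin when `docker compose` is missing
    if ! docker compose version >/dev/null 2>&1; then
        mkdir -p "$HOME/.docker/cli-plugins"
        # URL/release assumed for illustration
        curl -SL -o "$HOME/.docker/cli-plugins/docker-compose" \
            https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64
        chmod +x "$HOME/.docker/cli-plugins/docker-compose"
        docker compose version   # verify the plugin now resolves
    fi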
18:31:50 Starting xacml-pdp using postgres + Grafana/Prometheus
18:31:50 pap Pulling
18:31:50 kafka Pulling
18:31:50 policy-db-migrator Pulling
18:31:50 grafana Pulling
18:31:50 zookeeper Pulling
18:31:50 xacml-pdp Pulling
18:31:50 api Pulling
18:31:50 prometheus Pulling
18:31:50 postgres Pulling
[18:31:50-18:31:57 per-layer "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Extracting" / "Pull complete" progress output omitted]
18:31:55 xacml-pdp Pulled
18:31:55 api Pulled
18:31:55 pap Pulled
18:31:56 policy-db-migrator Pulled
[layer downloads for the remaining images (kafka, zookeeper, grafana, prometheus, postgres) still in progress at 18:31:57, where the excerpt ends]
09d5a3f70313 Downloading [=======================================> ] 86.51MB/109.2MB 18:31:57 c49e0ee60bfb Downloading [> ] 539.6kB/107.3MB 18:31:57 55f2b468da67 Downloading [===============================================> ] 244.9MB/257.9MB 18:31:57 1e017ebebdbd Extracting [===============================================> ] 35MB/37.19MB 18:31:57 eabd8714fec9 Extracting [================> ] 120.3MB/375MB 18:31:57 2d429b9e73a6 Pull complete 18:31:57 09d5a3f70313 Downloading [============================================> ] 96.24MB/109.2MB 18:31:57 f3b09c502777 Extracting [====================> ] 23.4MB/56.52MB 18:31:57 1e017ebebdbd Extracting [==================================================>] 37.19MB/37.19MB 18:31:57 55f2b468da67 Verifying Checksum 18:31:57 55f2b468da67 Download complete 18:31:57 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 18:31:57 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 18:31:57 f18232174bc9 Pull complete 18:31:57 e60d9caeb0b8 Extracting [==================================================>] 140B/140B 18:31:57 e60d9caeb0b8 Extracting [==================================================>] 140B/140B 18:31:57 c49e0ee60bfb Downloading [==> ] 4.865MB/107.3MB 18:31:57 384497dbce3b Downloading [> ] 539.6kB/63.48MB 18:31:57 eabd8714fec9 Extracting [================> ] 124.8MB/375MB 18:31:57 09d5a3f70313 Downloading [================================================> ] 106MB/109.2MB 18:31:57 1e017ebebdbd Pull complete 18:31:57 f3b09c502777 Extracting [======================> ] 25.62MB/56.52MB 18:31:57 09d5a3f70313 Verifying Checksum 18:31:57 09d5a3f70313 Download complete 18:31:57 055b9255fa03 Downloading [============> ] 3.01kB/11.92kB 18:31:57 055b9255fa03 Download complete 18:31:57 b176d7edde70 Downloading [==================================================>] 1.227kB/1.227kB 18:31:57 b176d7edde70 Verifying Checksum 18:31:57 b176d7edde70 Download complete 18:31:57 c49e0ee60bfb Downloading [======> ] 14.06MB/107.3MB 18:31:57 46eab5b44a35 Pull complete 18:31:57 c4d302cc468d Extracting [> ] 65.54kB/4.534MB 18:31:57 55f2b468da67 Extracting [> ] 557.1kB/257.9MB 18:31:57 e60d9caeb0b8 Pull complete 18:31:57 384497dbce3b Downloading [===> ] 4.324MB/63.48MB 18:31:57 f61a19743345 Extracting [> ] 65.54kB/3.524MB 18:31:57 eabd8714fec9 Extracting [=================> ] 129.8MB/375MB 18:31:57 f3b09c502777 Extracting [========================> ] 27.85MB/56.52MB 18:31:57 c49e0ee60bfb Downloading [===========> ] 23.79MB/107.3MB 18:31:57 c4d302cc468d Extracting [==================> ] 1.704MB/4.534MB 18:31:57 55f2b468da67 Extracting [=> ] 8.913MB/257.9MB 18:31:57 384497dbce3b Downloading [=======> ] 9.731MB/63.48MB 18:31:57 eabd8714fec9 Extracting [=================> ] 133.1MB/375MB 18:31:57 f3b09c502777 Extracting [===================================> ] 40.11MB/56.52MB 18:31:57 c4d302cc468d Extracting [==================================================>] 4.534MB/4.534MB 18:31:57 f61a19743345 Extracting [====> ] 327.7kB/3.524MB 18:31:57 c49e0ee60bfb Downloading [=================> ] 36.76MB/107.3MB 18:31:57 384497dbce3b Downloading [============> ] 15.68MB/63.48MB 18:31:57 c4d302cc468d Pull complete 18:31:57 55f2b468da67 Extracting [===> ] 16.71MB/257.9MB 18:31:57 eabd8714fec9 Extracting [==================> ] 135.9MB/375MB 18:31:57 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB 18:31:57 f61a19743345 Extracting [========================> ] 1.704MB/3.524MB 18:31:57 f3b09c502777 Extracting 
[========================================> ] 45.68MB/56.52MB 18:31:57 c49e0ee60bfb Downloading [====================> ] 44.33MB/107.3MB 18:31:57 f61a19743345 Extracting [==================================================>] 3.524MB/3.524MB 18:31:57 f61a19743345 Extracting [==================================================>] 3.524MB/3.524MB 18:31:57 384497dbce3b Downloading [==================> ] 23.79MB/63.48MB 18:31:57 55f2b468da67 Extracting [====> ] 21.17MB/257.9MB 18:31:57 eabd8714fec9 Extracting [==================> ] 139.3MB/375MB 18:31:57 c49e0ee60bfb Downloading [===========================> ] 58.93MB/107.3MB 18:31:57 f3b09c502777 Extracting [================================================> ] 55.15MB/56.52MB 18:31:57 01e0882c90d9 Extracting [==========> ] 294.9kB/1.447MB 18:31:57 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB 18:31:57 384497dbce3b Downloading [============================> ] 35.68MB/63.48MB 18:31:58 c49e0ee60bfb Downloading [===============================> ] 68.12MB/107.3MB 18:31:58 384497dbce3b Downloading [=============================> ] 37.85MB/63.48MB 18:31:58 eabd8714fec9 Extracting [===================> ] 142.6MB/375MB 18:31:58 f3b09c502777 Extracting [=================================================> ] 56.26MB/56.52MB 18:31:58 01e0882c90d9 Pull complete 18:31:58 f61a19743345 Pull complete 18:31:58 8af57d8c9f49 Extracting [> ] 98.3kB/8.735MB 18:31:58 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB 18:31:58 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB 18:31:58 f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 18:31:58 c49e0ee60bfb Downloading [=======================================> ] 83.8MB/107.3MB 18:31:58 384497dbce3b Downloading [========================================> ] 50.82MB/63.48MB 18:31:58 eabd8714fec9 Extracting [===================> ] 145.4MB/375MB 18:31:58 f3b09c502777 Pull complete 18:31:58 408012a7b118 Extracting [==================================================>] 637B/637B 18:31:58 408012a7b118 Extracting [==================================================>] 637B/637B 18:31:58 8af57d8c9f49 Extracting [======> ] 1.081MB/8.735MB 18:31:58 55f2b468da67 Extracting [=====> ] 29.52MB/257.9MB 18:31:58 531ee2cf3c0c Extracting [=====> ] 884.7kB/8.066MB 18:31:58 384497dbce3b Verifying Checksum 18:31:58 384497dbce3b Download complete 18:31:58 c49e0ee60bfb Downloading [==============================================> ] 100.6MB/107.3MB 18:31:58 eabd8714fec9 Extracting [===================> ] 148.2MB/375MB 18:31:58 8af57d8c9f49 Extracting [===========================> ] 4.817MB/8.735MB 18:31:58 55f2b468da67 Extracting [=======> ] 36.77MB/257.9MB 18:31:58 c49e0ee60bfb Verifying Checksum 18:31:58 c49e0ee60bfb Download complete 18:31:58 531ee2cf3c0c Extracting [==========================> ] 4.325MB/8.066MB 18:31:58 408012a7b118 Pull complete 18:31:58 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 18:31:58 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 18:31:58 eabd8714fec9 Extracting [====================> ] 152.1MB/375MB 18:31:58 8af57d8c9f49 Extracting [=============================================> ] 7.963MB/8.735MB 18:31:58 55f2b468da67 Extracting [=========> ] 46.79MB/257.9MB 18:31:58 531ee2cf3c0c Extracting [======================================> ] 6.291MB/8.066MB 18:31:58 8af57d8c9f49 Extracting [==================================================>] 8.735MB/8.735MB 
18:31:58 8af57d8c9f49 Pull complete 18:31:58 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB 18:31:58 c53a11b7c6fc Extracting [============================> ] 32.77kB/58.08kB 18:31:58 c53a11b7c6fc Extracting [==================================================>] 58.08kB/58.08kB 18:31:58 44986281b8b9 Pull complete 18:31:58 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 18:31:58 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 18:31:58 eabd8714fec9 Extracting [====================> ] 155.4MB/375MB 18:31:58 55f2b468da67 Extracting [===========> ] 59.05MB/257.9MB 18:31:58 531ee2cf3c0c Pull complete 18:31:58 ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 18:31:58 c53a11b7c6fc Pull complete 18:31:58 e032d0a5e409 Extracting [==================================================>] 27.77kB/27.77kB 18:31:58 e032d0a5e409 Extracting [==================================================>] 27.77kB/27.77kB 18:31:58 eabd8714fec9 Extracting [=====================> ] 160.4MB/375MB 18:31:58 55f2b468da67 Extracting [=============> ] 70.75MB/257.9MB 18:31:58 ed54a7dee1d8 Extracting [============> ] 294.9kB/1.196MB 18:31:58 bf70c5107ab5 Pull complete 18:31:58 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 18:31:58 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 18:31:58 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 18:31:58 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 18:31:58 55f2b468da67 Extracting [===============> ] 79.1MB/257.9MB 18:31:58 eabd8714fec9 Extracting [======================> ] 166.6MB/375MB 18:31:58 eabd8714fec9 Extracting [=======================> ] 175.5MB/375MB 18:31:58 55f2b468da67 Extracting [=================> ] 89.69MB/257.9MB 18:31:58 eabd8714fec9 Extracting [========================> ] 186.1MB/375MB 18:31:58 e032d0a5e409 Pull complete 18:31:58 1ccde423731d Pull complete 18:31:58 ed54a7dee1d8 Pull complete 18:31:58 55f2b468da67 Extracting [===================> ] 101.9MB/257.9MB 18:31:59 eabd8714fec9 Extracting [==========================> ] 198.3MB/375MB 18:31:59 55f2b468da67 Extracting [=====================> ] 108.6MB/257.9MB 18:31:59 7221d93db8a9 Extracting [==================================================>] 100B/100B 18:31:59 7221d93db8a9 Extracting [==================================================>] 100B/100B 18:31:59 eabd8714fec9 Extracting [===========================> ] 207.8MB/375MB 18:31:59 55f2b468da67 Extracting [=====================> ] 110.3MB/257.9MB 18:31:59 eabd8714fec9 Extracting [============================> ] 216.7MB/375MB 18:31:59 12c5c803443f Extracting [==================================================>] 116B/116B 18:31:59 12c5c803443f Extracting [==================================================>] 116B/116B 18:31:59 c49e0ee60bfb Extracting [> ] 557.1kB/107.3MB 18:31:59 eabd8714fec9 Extracting [=============================> ] 217.8MB/375MB 18:31:59 55f2b468da67 Extracting [=====================> ] 112.5MB/257.9MB 18:31:59 7221d93db8a9 Pull complete 18:31:59 c49e0ee60bfb Extracting [=> ] 3.899MB/107.3MB 18:31:59 eabd8714fec9 Extracting [=============================> ] 221.2MB/375MB 18:31:59 55f2b468da67 Extracting [======================> ] 114.8MB/257.9MB 18:31:59 7df673c7455d Extracting [==================================================>] 694B/694B 18:31:59 
7df673c7455d Extracting [==================================================>] 694B/694B 18:31:59 c49e0ee60bfb Extracting [==> ] 6.128MB/107.3MB 18:31:59 55f2b468da67 Extracting [=======================> ] 119.2MB/257.9MB 18:31:59 eabd8714fec9 Extracting [=============================> ] 224.5MB/375MB 18:31:59 12c5c803443f Pull complete 18:31:59 c49e0ee60bfb Extracting [===> ] 7.242MB/107.3MB 18:31:59 eabd8714fec9 Extracting [==============================> ] 226.2MB/375MB 18:31:59 55f2b468da67 Extracting [=======================> ] 120.3MB/257.9MB 18:32:00 c49e0ee60bfb Extracting [=====> ] 12.26MB/107.3MB 18:32:00 eabd8714fec9 Extracting [==============================> ] 232.3MB/375MB 18:32:00 55f2b468da67 Extracting [========================> ] 124.8MB/257.9MB 18:32:00 c49e0ee60bfb Extracting [=======> ] 16.15MB/107.3MB 18:32:00 eabd8714fec9 Extracting [===============================> ] 237.9MB/375MB 18:32:00 55f2b468da67 Extracting [=========================> ] 129.8MB/257.9MB 18:32:00 eabd8714fec9 Extracting [================================> ] 242.3MB/375MB 18:32:00 c49e0ee60bfb Extracting [========> ] 17.83MB/107.3MB 18:32:00 55f2b468da67 Extracting [==========================> ] 134.3MB/257.9MB 18:32:00 c49e0ee60bfb Extracting [========> ] 18.38MB/107.3MB 18:32:00 55f2b468da67 Extracting [==========================> ] 135.4MB/257.9MB 18:32:00 eabd8714fec9 Extracting [================================> ] 244.5MB/375MB 18:32:00 c49e0ee60bfb Extracting [===========> ] 25.62MB/107.3MB 18:32:00 55f2b468da67 Extracting [===========================> ] 140.9MB/257.9MB 18:32:00 eabd8714fec9 Extracting [=================================> ] 247.9MB/375MB 18:32:00 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 18:32:00 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 18:32:00 c49e0ee60bfb Extracting [===============> ] 32.31MB/107.3MB 18:32:00 55f2b468da67 Extracting [============================> ] 144.8MB/257.9MB 18:32:00 eabd8714fec9 Extracting [=================================> ] 251.8MB/375MB 18:32:00 c49e0ee60bfb Extracting [================> ] 36.21MB/107.3MB 18:32:00 eabd8714fec9 Extracting [==================================> ] 256.2MB/375MB 18:32:00 55f2b468da67 Extracting [============================> ] 148.7MB/257.9MB 18:32:00 c49e0ee60bfb Extracting [==================> ] 38.99MB/107.3MB 18:32:01 55f2b468da67 Extracting [=============================> ] 152.6MB/257.9MB 18:32:01 eabd8714fec9 Extracting [===================================> ] 263.5MB/375MB 18:32:01 c49e0ee60bfb Extracting [====================> ] 44.56MB/107.3MB 18:32:01 55f2b468da67 Extracting [==============================> ] 156MB/257.9MB 18:32:01 eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB 18:32:01 c49e0ee60bfb Extracting [======================> ] 47.91MB/107.3MB 18:32:01 55f2b468da67 Extracting [==============================> ] 158.8MB/257.9MB 18:32:01 eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB 18:32:01 c49e0ee60bfb Extracting [========================> ] 53.48MB/107.3MB 18:32:01 55f2b468da67 Extracting [===============================> ] 163.8MB/257.9MB 18:32:01 eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB 18:32:01 c49e0ee60bfb Extracting [===========================> ] 58.49MB/107.3MB 18:32:01 55f2b468da67 Extracting [================================> ] 168.2MB/257.9MB 18:32:01 eabd8714fec9 Extracting 
[====================================> ] 273MB/375MB 18:32:01 7df673c7455d Pull complete 18:32:01 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB 18:32:01 c49e0ee60bfb Extracting [=============================> ] 63.5MB/107.3MB 18:32:02 c49e0ee60bfb Extracting [==============================> ] 65.18MB/107.3MB 18:32:02 eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB 18:32:02 c49e0ee60bfb Extracting [==============================> ] 66.29MB/107.3MB 18:32:02 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB 18:32:02 c49e0ee60bfb Extracting [================================> ] 70.19MB/107.3MB 18:32:02 e27c75a98748 Pull complete 18:32:02 eabd8714fec9 Extracting [====================================> ] 275.2MB/375MB 18:32:02 55f2b468da67 Extracting [=================================> ] 171.6MB/257.9MB 18:32:02 c49e0ee60bfb Extracting [==================================> ] 73.53MB/107.3MB 18:32:02 eabd8714fec9 Extracting [=====================================> ] 279.1MB/375MB 18:32:02 55f2b468da67 Extracting [=================================> ] 173.2MB/257.9MB 18:32:02 c49e0ee60bfb Extracting [===================================> ] 76.87MB/107.3MB 18:32:02 eabd8714fec9 Extracting [=====================================> ] 284.7MB/375MB 18:32:02 55f2b468da67 Extracting [=================================> ] 174.9MB/257.9MB 18:32:02 c49e0ee60bfb Extracting [=====================================> ] 79.66MB/107.3MB 18:32:02 eabd8714fec9 Extracting [======================================> ] 287.4MB/375MB 18:32:02 55f2b468da67 Extracting [==================================> ] 176MB/257.9MB 18:32:02 c49e0ee60bfb Extracting [======================================> ] 83.56MB/107.3MB 18:32:02 e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 18:32:02 eabd8714fec9 Extracting [======================================> ] 292.5MB/375MB 18:32:03 55f2b468da67 Extracting [==================================> ] 178.8MB/257.9MB 18:32:03 c49e0ee60bfb Extracting [========================================> ] 87.46MB/107.3MB 18:32:03 e73cb4a42719 Extracting [=> ] 3.899MB/109.1MB 18:32:03 eabd8714fec9 Extracting [=======================================> ] 295.2MB/375MB 18:32:03 55f2b468da67 Extracting [===================================> ] 182.2MB/257.9MB 18:32:03 c49e0ee60bfb Extracting [============================================> ] 95.26MB/107.3MB 18:32:03 e73cb4a42719 Extracting [===> ] 7.799MB/109.1MB 18:32:03 eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 18:32:03 55f2b468da67 Extracting [====================================> ] 186.1MB/257.9MB 18:32:03 e73cb4a42719 Extracting [=====> ] 11.7MB/109.1MB 18:32:03 c49e0ee60bfb Extracting [==============================================> ] 100.3MB/107.3MB 18:32:03 eabd8714fec9 Extracting [=======================================> ] 298.6MB/375MB 18:32:03 55f2b468da67 Extracting [====================================> ] 190.5MB/257.9MB 18:32:03 e73cb4a42719 Extracting [=======> ] 16.15MB/109.1MB 18:32:03 c49e0ee60bfb Extracting [================================================> ] 103.6MB/107.3MB 18:32:03 eabd8714fec9 Extracting [========================================> ] 300.3MB/375MB 18:32:03 55f2b468da67 Extracting [=====================================> ] 193.9MB/257.9MB 18:32:03 e73cb4a42719 Extracting [========> ] 18.94MB/109.1MB 18:32:03 eabd8714fec9 Extracting [========================================> ] 302.5MB/375MB 18:32:03 c49e0ee60bfb Extracting 
[================================================> ] 104.7MB/107.3MB 18:32:03 e73cb4a42719 Extracting [==========> ] 22.28MB/109.1MB 18:32:03 55f2b468da67 Extracting [=====================================> ] 195.5MB/257.9MB 18:32:03 c49e0ee60bfb Extracting [=================================================> ] 107MB/107.3MB 18:32:03 eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB 18:32:03 c49e0ee60bfb Extracting [==================================================>] 107.3MB/107.3MB 18:32:03 e73cb4a42719 Extracting [===========> ] 25.62MB/109.1MB 18:32:04 e73cb4a42719 Extracting [============> ] 26.74MB/109.1MB 18:32:04 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 18:32:04 eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB 18:32:04 e73cb4a42719 Extracting [==============> ] 30.64MB/109.1MB 18:32:04 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB 18:32:04 eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 18:32:04 e73cb4a42719 Extracting [===============> ] 33.42MB/109.1MB 18:32:04 55f2b468da67 Extracting [=======================================> ] 201.7MB/257.9MB 18:32:04 eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB 18:32:04 e73cb4a42719 Extracting [=================> ] 38.44MB/109.1MB 18:32:04 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB 18:32:04 e73cb4a42719 Extracting [====================> ] 44.56MB/109.1MB 18:32:04 eabd8714fec9 Extracting [=========================================> ] 311.4MB/375MB 18:32:04 55f2b468da67 Extracting [=======================================> ] 204.4MB/257.9MB 18:32:04 e73cb4a42719 Extracting [=======================> ] 50.69MB/109.1MB 18:32:04 eabd8714fec9 Extracting [=========================================> ] 313.1MB/375MB 18:32:04 55f2b468da67 Extracting [=======================================> ] 206.1MB/257.9MB 18:32:04 e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB 18:32:04 eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 18:32:05 e73cb4a42719 Extracting [=========================> ] 54.59MB/109.1MB 18:32:05 eabd8714fec9 Extracting [==========================================> ] 316.4MB/375MB 18:32:05 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB 18:32:05 c49e0ee60bfb Pull complete 18:32:05 e73cb4a42719 Extracting [==========================> ] 57.93MB/109.1MB 18:32:05 eabd8714fec9 Extracting [==========================================> ] 319.8MB/375MB 18:32:05 prometheus Pulled 18:32:05 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB 18:32:05 eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB 18:32:05 e73cb4a42719 Extracting [===========================> ] 59.6MB/109.1MB 18:32:05 e73cb4a42719 Extracting [==============================> ] 65.73MB/109.1MB 18:32:05 e73cb4a42719 Extracting [=================================> ] 72.97MB/109.1MB 18:32:05 eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB 18:32:05 e73cb4a42719 Extracting [===================================> ] 76.87MB/109.1MB 18:32:05 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB 18:32:05 384497dbce3b Extracting [> ] 557.1kB/63.48MB 18:32:05 eabd8714fec9 Extracting [===========================================> 
] 326.4MB/375MB 18:32:05 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB 18:32:05 e73cb4a42719 Extracting [=====================================> ] 80.77MB/109.1MB 18:32:05 384497dbce3b Extracting [> ] 1.114MB/63.48MB 18:32:05 55f2b468da67 Extracting [=========================================> ] 215MB/257.9MB 18:32:05 e73cb4a42719 Extracting [=======================================> ] 85.23MB/109.1MB 18:32:05 eabd8714fec9 Extracting [===========================================> ] 328.1MB/375MB 18:32:05 55f2b468da67 Extracting [=========================================> ] 215.6MB/257.9MB 18:32:06 55f2b468da67 Extracting [=========================================> ] 216.1MB/257.9MB 18:32:06 e73cb4a42719 Extracting [=========================================> ] 91.36MB/109.1MB 18:32:06 eabd8714fec9 Extracting [===========================================> ] 328.7MB/375MB 18:32:06 384497dbce3b Extracting [=> ] 1.671MB/63.48MB 18:32:06 55f2b468da67 Extracting [==========================================> ] 219.5MB/257.9MB 18:32:06 e73cb4a42719 Extracting [==========================================> ] 93.03MB/109.1MB 18:32:06 eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB 18:32:06 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB 18:32:06 eabd8714fec9 Extracting [============================================> ] 330.9MB/375MB 18:32:06 e73cb4a42719 Extracting [===========================================> ] 94.7MB/109.1MB 18:32:06 384497dbce3b Extracting [=> ] 2.228MB/63.48MB 18:32:06 55f2b468da67 Extracting [==========================================> ] 221.7MB/257.9MB 18:32:06 e73cb4a42719 Extracting [===========================================> ] 95.26MB/109.1MB 18:32:06 eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 18:32:06 55f2b468da67 Extracting [===========================================> ] 223.9MB/257.9MB 18:32:06 e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB 18:32:06 384497dbce3b Extracting [==> ] 2.785MB/63.48MB 18:32:06 e73cb4a42719 Extracting [============================================> ] 97.48MB/109.1MB 18:32:06 eabd8714fec9 Extracting [============================================> ] 332.6MB/375MB 18:32:06 55f2b468da67 Extracting [===========================================> ] 225.6MB/257.9MB 18:32:06 e73cb4a42719 Extracting [=============================================> ] 100.3MB/109.1MB 18:32:06 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB 18:32:06 eabd8714fec9 Extracting [============================================> ] 335.3MB/375MB 18:32:06 384497dbce3b Extracting [===> ] 4.456MB/63.48MB 18:32:06 e73cb4a42719 Extracting [==============================================> ] 102.5MB/109.1MB 18:32:07 eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB 18:32:07 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB 18:32:07 e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB 18:32:07 384497dbce3b Extracting [===> ] 5.014MB/63.48MB 18:32:07 55f2b468da67 Extracting [============================================> ] 230.6MB/257.9MB 18:32:07 e73cb4a42719 Extracting [================================================> ] 104.7MB/109.1MB 18:32:07 384497dbce3b Extracting [=====> ] 6.685MB/63.48MB 18:32:07 384497dbce3b Extracting [=====> ] 7.242MB/63.48MB 
18:32:07 eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 18:32:07 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB 18:32:07 e73cb4a42719 Extracting [================================================> ] 105.3MB/109.1MB 18:32:07 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB 18:32:07 e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB 18:32:07 384497dbce3b Extracting [======> ] 7.799MB/63.48MB 18:32:08 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB 18:32:08 e73cb4a42719 Extracting [=================================================> ] 107MB/109.1MB 18:32:08 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB 18:32:08 384497dbce3b Extracting [======> ] 8.356MB/63.48MB 18:32:08 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 18:32:08 55f2b468da67 Extracting [=============================================> ] 234MB/257.9MB 18:32:08 384497dbce3b Extracting [=======> ] 9.47MB/63.48MB 18:32:08 e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 18:32:08 e73cb4a42719 Extracting [=================================================> ] 108.1MB/109.1MB 18:32:08 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 18:32:08 384497dbce3b Extracting [=======> ] 10.03MB/63.48MB 18:32:08 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB 18:32:08 e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 18:32:08 eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB 18:32:08 55f2b468da67 Extracting [==============================================> ] 237.3MB/257.9MB 18:32:08 384497dbce3b Extracting [=========> ] 11.7MB/63.48MB 18:32:08 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB 18:32:08 384497dbce3b Extracting [===========> ] 14.48MB/63.48MB 18:32:09 55f2b468da67 Extracting [==============================================> ] 241.8MB/257.9MB 18:32:09 384497dbce3b Extracting [============> ] 16.15MB/63.48MB 18:32:09 eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB 18:32:09 eabd8714fec9 Extracting [==============================================> ] 350.4MB/375MB 18:32:09 384497dbce3b Extracting [=============> ] 17.27MB/63.48MB 18:32:09 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 18:32:09 eabd8714fec9 Extracting [===============================================> ] 353.2MB/375MB 18:32:09 384497dbce3b Extracting [==============> ] 18.38MB/63.48MB 18:32:09 55f2b468da67 Extracting [================================================> ] 251.8MB/257.9MB 18:32:09 384497dbce3b Extracting [================> ] 21.17MB/63.48MB 18:32:09 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 18:32:09 384497dbce3b Extracting [===================> ] 24.51MB/63.48MB 18:32:09 55f2b468da67 Extracting [=================================================> ] 256.2MB/257.9MB 18:32:09 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 18:32:09 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 18:32:09 eabd8714fec9 Extracting 
[================================================> ] 362.1MB/375MB 18:32:09 384497dbce3b Extracting [======================> ] 28.41MB/63.48MB 18:32:09 eabd8714fec9 Extracting [=================================================> ] 368.2MB/375MB 18:32:09 384497dbce3b Extracting [=========================> ] 31.75MB/63.48MB 18:32:09 eabd8714fec9 Extracting [=================================================> ] 373.2MB/375MB 18:32:09 384497dbce3b Extracting [===========================> ] 35.09MB/63.48MB 18:32:09 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 18:32:10 e73cb4a42719 Pull complete 18:32:10 384497dbce3b Extracting [============================> ] 36.21MB/63.48MB 18:32:11 384497dbce3b Extracting [=============================> ] 37.32MB/63.48MB 18:32:11 384497dbce3b Extracting [================================> ] 41.22MB/63.48MB 18:32:11 384497dbce3b Extracting [===================================> ] 45.12MB/63.48MB 18:32:11 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 18:32:11 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 18:32:11 384497dbce3b Extracting [======================================> ] 49.02MB/63.48MB 18:32:11 384497dbce3b Extracting [=======================================> ] 49.58MB/63.48MB 18:32:11 384497dbce3b Extracting [=========================================> ] 52.92MB/63.48MB 18:32:11 55f2b468da67 Pull complete 18:32:12 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB 18:32:12 384497dbce3b Extracting [=================================================> ] 62.39MB/63.48MB 18:32:12 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 18:32:12 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 18:32:12 82bfc142787e Extracting [> ] 98.3kB/8.613MB 18:32:12 82bfc142787e Extracting [===============> ] 2.753MB/8.613MB 18:32:12 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 18:32:13 eabd8714fec9 Pull complete 18:32:14 a83b68436f09 Pull complete 18:32:14 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 18:32:14 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 18:32:14 384497dbce3b Pull complete 18:32:14 787d6bee9571 Extracting [==================================================>] 127B/127B 18:32:14 787d6bee9571 Extracting [==================================================>] 127B/127B 18:32:14 82bfc142787e Pull complete 18:32:14 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 18:32:14 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 18:32:14 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 18:32:14 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 18:32:15 45fd2fec8a19 Pull complete 18:32:15 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 18:32:15 055b9255fa03 Pull complete 18:32:15 787d6bee9571 Pull complete 18:32:15 13ff0988aaea Extracting [==================================================>] 167B/167B 18:32:15 13ff0988aaea Extracting [==================================================>] 167B/167B 18:32:15 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 
18:32:15 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 18:32:15 46baca71a4ef Pull complete 18:32:15 8f10199ed94b Extracting [========> ] 1.573MB/8.768MB 18:32:15 13ff0988aaea Pull complete 18:32:15 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 18:32:15 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 18:32:15 b176d7edde70 Pull complete 18:32:15 b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 18:32:15 grafana Pulled 18:32:15 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 18:32:15 8f10199ed94b Pull complete 18:32:15 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 18:32:15 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 18:32:15 b0e0ef7895f4 Extracting [==================> ] 13.37MB/37.01MB 18:32:15 4b82842ab819 Pull complete 18:32:15 7e568a0dc8fb Extracting [==================================================>] 184B/184B 18:32:15 7e568a0dc8fb Extracting [==================================================>] 184B/184B 18:32:15 f963a77d2726 Pull complete 18:32:15 b0e0ef7895f4 Extracting [======================================> ] 28.31MB/37.01MB 18:32:15 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 18:32:15 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 18:32:15 7e568a0dc8fb Pull complete 18:32:15 postgres Pulled 18:32:15 b0e0ef7895f4 Pull complete 18:32:15 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 18:32:15 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 18:32:15 f3a82e9f1761 Extracting [=================> ] 15.14MB/44.41MB 18:32:15 c0c90eeb8aca Pull complete 18:32:15 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 18:32:15 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 18:32:15 f3a82e9f1761 Extracting [===================================> ] 31.65MB/44.41MB 18:32:15 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 18:32:15 5cfb27c10ea5 Pull complete 18:32:15 40a5eed61bb0 Extracting [==================================================>] 98B/98B 18:32:15 40a5eed61bb0 Extracting [==================================================>] 98B/98B 18:32:15 f3a82e9f1761 Pull complete 18:32:15 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 18:32:15 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 18:32:15 40a5eed61bb0 Pull complete 18:32:15 e040ea11fa10 Extracting [==================================================>] 173B/173B 18:32:15 e040ea11fa10 Extracting [==================================================>] 173B/173B 18:32:15 79161a3f5362 Pull complete 18:32:15 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 18:32:15 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 18:32:15 e040ea11fa10 Pull complete 18:32:15 9c266ba63f51 Pull complete 18:32:15 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 18:32:15 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 18:32:16 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 18:32:16 
2e8a7df9c2ee Pull complete 18:32:16 10f05dd8b1db Extracting [==================================================>] 98B/98B 18:32:16 10f05dd8b1db Extracting [==================================================>] 98B/98B 18:32:16 09d5a3f70313 Extracting [=====> ] 11.7MB/109.2MB 18:32:16 10f05dd8b1db Pull complete 18:32:16 41dac8b43ba6 Extracting [==================================================>] 171B/171B 18:32:16 41dac8b43ba6 Extracting [==================================================>] 171B/171B 18:32:16 09d5a3f70313 Extracting [============> ] 27.85MB/109.2MB 18:32:16 41dac8b43ba6 Pull complete 18:32:16 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 18:32:16 09d5a3f70313 Extracting [====================> ] 44.56MB/109.2MB 18:32:16 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 18:32:16 71a9f6a9ab4d Pull complete 18:32:16 09d5a3f70313 Extracting [=============================> ] 63.5MB/109.2MB 18:32:16 da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 18:32:16 09d5a3f70313 Extracting [=====================================> ] 82.44MB/109.2MB 18:32:16 da3ed5db7103 Extracting [=====> ] 14.48MB/127.4MB 18:32:16 09d5a3f70313 Extracting [=============================================> ] 99.16MB/109.2MB 18:32:16 da3ed5db7103 Extracting [==========> ] 26.18MB/127.4MB 18:32:16 09d5a3f70313 Extracting [================================================> ] 107MB/109.2MB 18:32:16 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 18:32:16 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 18:32:16 da3ed5db7103 Extracting [===============> ] 40.11MB/127.4MB 18:32:16 09d5a3f70313 Pull complete 18:32:16 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 18:32:16 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 18:32:16 da3ed5db7103 Extracting [======================> ] 56.82MB/127.4MB 18:32:16 356f5c2c843b Pull complete 18:32:17 kafka Pulled 18:32:17 da3ed5db7103 Extracting [=============================> ] 75.2MB/127.4MB 18:32:17 da3ed5db7103 Extracting [=====================================> ] 95.26MB/127.4MB 18:32:17 da3ed5db7103 Extracting [============================================> ] 112.5MB/127.4MB 18:32:17 da3ed5db7103 Extracting [===============================================> ] 120.9MB/127.4MB 18:32:17 da3ed5db7103 Extracting [=================================================> ] 125.3MB/127.4MB 18:32:17 da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB 18:32:17 da3ed5db7103 Pull complete 18:32:17 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 18:32:17 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 18:32:17 c955f6e31a04 Pull complete 18:32:17 zookeeper Pulled 18:32:17 Network compose_default Creating 18:32:17 Network compose_default Created 18:32:17 Container zookeeper Creating 18:32:17 Container postgres Creating 18:32:17 Container prometheus Creating 18:32:34 Container postgres Created 18:32:34 Container zookeeper Created 18:32:34 Container prometheus Created 18:32:34 Container kafka Creating 18:32:34 Container grafana Creating 18:32:34 Container policy-db-migrator Creating 18:32:34 Container grafana Created 18:32:34 Container policy-db-migrator Created 18:32:34 Container policy-api Creating 18:32:34 Container kafka Created 18:32:35 
18:32:35 Container policy-api Created
18:32:35 Container policy-pap Creating
18:32:35 Container policy-pap Created
18:32:35 Container policy-xacml-pdp Creating
18:32:35 Container policy-xacml-pdp Created
18:32:35 Container prometheus Starting
18:32:35 Container postgres Starting
18:32:35 Container zookeeper Starting
18:32:36 Container zookeeper Started
18:32:36 Container kafka Starting
18:32:36 Container kafka Started
18:32:38 Container postgres Started
18:32:38 Container policy-db-migrator Starting
18:32:39 Container policy-db-migrator Started
18:32:39 Container policy-api Starting
18:32:40 Container policy-api Started
18:32:40 Container policy-pap Starting
18:32:40 Container policy-pap Started
18:32:40 Container policy-xacml-pdp Starting
18:32:41 Container prometheus Started
18:32:41 Container grafana Starting
18:32:42 Container policy-xacml-pdp Started
18:32:43 Container grafana Started
18:32:43 Prometheus server: http://localhost:30259
18:32:43 Grafana server: http://localhost:30269
18:32:43 Waiting 1 minute for xacml-pdp to start...
18:33:43 Checking if REST port 30004 is open on localhost ...
18:33:43 IMAGE                                                         NAMES              STATUS
18:33:43 nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT    policy-xacml-pdp   Up About a minute
18:33:43 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT          policy-pap         Up About a minute
18:33:43 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT          policy-api         Up About a minute
18:33:43 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9             kafka              Up About a minute
18:33:43 nexus3.onap.org:10001/grafana/grafana:latest                  grafana            Up About a minute
18:33:43 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest        zookeeper          Up About a minute
18:33:43 nexus3.onap.org:10001/prom/prometheus:latest                  prometheus         Up About a minute
18:33:43 nexus3.onap.org:10001/library/postgres:16.4                   postgres           Up About a minute
18:33:43 Cloning into '/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/csit/resources/tests/models'...
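
Note: the "Checking if REST port 30004 is open" step above can be reproduced outside the CI wrapper; a minimal sketch, assuming bash with /dev/tcp support and the same host port mapping (30004) shown in this log:

    # Poll the xacml-pdp REST port instead of sleeping a fixed minute.
    # 30004 is the host-mapped port taken from the log; adjust if yours differs.
    for _ in $(seq 1 60); do
      if (echo > /dev/tcp/localhost/30004) 2>/dev/null; then
        echo "REST port 30004 is open"
        break
      fi
      sleep 5
    done
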
18:33:44 Building robot framework docker image
18:34:23 sha256:53c0454e4fa8e231e0d6aba9040e3408ace50506b8a72de13f7efe5ee54e35de
18:34:27 top - 18:34:27 up 4 min, 0 users, load average: 2.24, 1.68, 0.73
18:34:27 Tasks: 230 total, 1 running, 151 sleeping, 0 stopped, 0 zombie
18:34:27 %Cpu(s): 13.9 us, 3.1 sy, 0.0 ni, 77.7 id, 5.1 wa, 0.0 hi, 0.1 si, 0.1 st
18:34:27
18:34:27           total    used    free    shared    buff/cache    available
18:34:27 Mem:        31G    2.5G     21G       27M          7.1G          28G
18:34:27 Swap:      1.0G      0B    1.0G
18:34:27
18:34:27 IMAGE                                                         NAMES              STATUS
18:34:27 nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT    policy-xacml-pdp   Up About a minute
18:34:27 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT          policy-pap         Up About a minute
18:34:27 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT          policy-api         Up About a minute
18:34:27 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9             kafka              Up About a minute
18:34:27 nexus3.onap.org:10001/grafana/grafana:latest                  grafana            Up About a minute
18:34:27 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest        zookeeper          Up About a minute
18:34:27 nexus3.onap.org:10001/prom/prometheus:latest                  prometheus         Up About a minute
18:34:27 nexus3.onap.org:10001/library/postgres:16.4                   postgres           Up About a minute
18:34:27
18:34:30 CONTAINER ID   NAME               CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
18:34:30 42b6b53ec07f   policy-xacml-pdp   1.87%   176.2MiB / 31.41GiB   0.55%   45.1kB / 55.5kB   0B / 4.1kB      51
18:34:30 daf2f447f282   policy-pap         0.97%   517MiB / 31.41GiB     1.61%   2.14MB / 1.06MB   0B / 139MB      68
18:34:30 91e6420c4be3   policy-api         0.20%   415.2MiB / 31.41GiB   1.29%   1.14MB / 986kB    0B / 0B         59
18:34:30 c97c08af22ba   kafka              1.53%   390MiB / 31.41GiB     1.21%   186kB / 175kB     0B / 582kB      83
18:34:30 331bcc1edeb1   grafana            1.13%   109.5MiB / 31.41GiB   0.34%   19.1MB / 201kB    0B / 31.4MB     22
18:34:30 574b837ab926   zookeeper          0.17%   84.4MiB / 31.41GiB    0.26%   55.3kB / 46.9kB   4.1kB / 430kB   62
18:34:30 8bd6e6ee3690   prometheus         0.00%   20.32MiB / 31.41GiB   0.06%   62.8kB / 3.44kB   225kB / 0B      11
18:34:30 f42bcd06be9b   postgres           0.54%   86.13MiB / 31.41GiB   0.27%   2.56MB / 3.74MB   0B / 157MB      26
18:34:30
18:34:30 Container policy-csit Creating
18:34:30 Container policy-csit Created
18:34:30 Attaching to policy-csit
18:34:31 policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
18:34:31 policy-csit | Run Robot test
18:34:31 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
18:34:31 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
18:34:31 policy-csit | -v POLICY_API_IP:policy-api:6969
18:34:31 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
18:34:31 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
18:34:31 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
18:34:31 policy-csit | -v APEX_IP:policy-apex-pdp:6969
18:34:31 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
18:34:31 policy-csit | -v KAFKA_IP:kafka:9092
18:34:31 policy-csit | -v PROMETHEUS_IP:prometheus:9090
18:34:31 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
18:34:31 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
18:34:31 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
18:34:31 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
18:34:31 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
18:34:31 policy-csit | -v TEMP_FOLDER:/tmp/distribution
18:34:31 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
18:34:31 policy-csit | -v TEST_ENV:docker
18:34:31 policy-csit | -v JAEGER_IP:jaeger:16686
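
Note: the ROBOT_VARIABLES above are ordinary Robot Framework -v overrides; a minimal sketch of an equivalent manual invocation (variable list abridged, `robot` assumed available on PATH inside the policy-csit image):

    # Re-run the two suites with the same style of -v overrides the CSIT
    # wrapper passes; values copied from the log above, list abridged.
    robot \
      -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
      -v POLICY_API_IP:policy-api:6969 \
      -v POLICY_PAP_IP:policy-pap:6969 \
      -v POLICY_PDPX_IP:policy-xacml-pdp:6969 \
      -v PROMETHEUS_IP:prometheus:9090 \
      --outputdir /tmp/results \
      xacml-pdp-test.robot xacml-pdp-slas.robot
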
18:34:31 policy-csit | Starting Robot test suites ...
18:34:31 policy-csit | ==============================================================================
18:34:31 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
18:34:31 policy-csit | ==============================================================================
18:34:31 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
18:34:31 policy-csit | ==============================================================================
18:34:31 policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
18:34:31 policy-csit | ------------------------------------------------------------------------------
18:34:31 policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
18:34:31 policy-csit | ------------------------------------------------------------------------------
18:34:31 policy-csit | MakeTopics :: Creates the Policy topics | PASS |
18:34:31 policy-csit | ------------------------------------------------------------------------------
18:34:59 policy-csit | ExecuteXacmlPolicy | PASS |
18:34:59 policy-csit | ------------------------------------------------------------------------------
18:34:59 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
18:34:59 policy-csit | 4 tests, 4 passed, 0 failed
18:34:59 policy-csit | ==============================================================================
18:34:59 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
18:34:59 policy-csit | ==============================================================================
18:35:59 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
18:35:59 policy-csit | ------------------------------------------------------------------------------
18:36:00 policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
18:36:00 policy-csit | ------------------------------------------------------------------------------
18:36:00 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
18:36:00 policy-csit | 2 tests, 2 passed, 0 failed
18:36:00 policy-csit | ==============================================================================
18:36:00 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
18:36:00 policy-csit | 6 tests, 6 passed, 0 failed
18:36:00 policy-csit | ==============================================================================
18:36:00 policy-csit | Output: /tmp/results/output.xml
18:36:00 policy-csit | Log: /tmp/results/log.html
18:36:00 policy-csit | Report: /tmp/results/report.html
18:36:00 policy-csit | RESULT: 0
18:36:00 policy-csit exited with code 0
18:36:00 IMAGE                                                         NAMES              STATUS
18:36:00 nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT    policy-xacml-pdp   Up 3 minutes
18:36:00 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT          policy-pap         Up 3 minutes
18:36:00 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT          policy-api         Up 3 minutes
18:36:00 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9             kafka              Up 3 minutes
18:36:00 nexus3.onap.org:10001/grafana/grafana:latest                  grafana            Up 3 minutes
18:36:00 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest        zookeeper          Up 3 minutes
18:36:00 nexus3.onap.org:10001/prom/prometheus:latest                  prometheus         Up 3 minutes
18:36:00 nexus3.onap.org:10001/library/postgres:16.4                   postgres           Up 3 minutes
18:36:00 Shut down started!
18:36:02 Collecting logs from docker compose containers...
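
Note: ValidatePolicyDecisionsTotalCounter above checks a Prometheus counter; a minimal sketch of the same kind of query against the Prometheus instance this log exposes at http://localhost:30259 (the metric name below is an assumption, not taken from this log):

    # Query Prometheus for the xacml-pdp decision counter.
    # NOTE: pdpx_policy_decisions_total is an assumed metric name.
    curl -s 'http://localhost:30259/api/v1/query' \
      --data-urlencode 'query=pdpx_policy_decisions_total'
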
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677146236Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-16T18:32:43Z
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677449748Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677460838Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677464858Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677468238Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677471618Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677474928Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677477618Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677481108Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677485458Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677488529Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677491829Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677495579Z level=info msg=Target target=[all]
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677508959Z level=info msg="Path Home" path=/usr/share/grafana
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677512129Z level=info msg="Path Data" path=/var/lib/grafana
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677515129Z level=info msg="Path Logs" path=/var/log/grafana
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677518199Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677521419Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
18:36:02 grafana | logger=settings t=2025-06-16T18:32:43.677524929Z level=info msg="App mode production"
18:36:02 grafana | logger=featuremgmt t=2025-06-16T18:32:43.677867071Z level=info msg=FeatureToggles logRowsPopoverMenu=true logsPanelControls=true recoveryThreshold=true prometheusAzureOverrideAudience=true dashgpt=true onPremToCloudMigrations=true formatString=true dashboardScene=true alertRuleRestore=true angularDeprecationUI=true pluginsDetailsRightPanel=true logsExploreTableVisualisation=true cloudWatchCrossAccountQuerying=true logsContextDatasourceUi=true prometheusUsesCombobox=true alertingSimplifiedRouting=true lokiLabelNamesQueryApi=true awsAsyncQueryCaching=true transformationsRedesign=true azureMonitorEnableUserAuth=true annotationPermissionUpdate=true correlations=true alertingRulePermanentlyDelete=true cloudWatchNewLabelParsing=true nestedFolders=true alertingInsights=true ssoSettingsSAML=true newDashboardSharingComponent=true alertingQueryAndExpressionsStepMode=true newFiltersUI=true unifiedStorageSearchPermissionFiltering=true kubernetesClientDashboardsFolders=true alertingRuleVersionHistoryRestore=true azureMonitorPrometheusExemplars=true grafanaconThemes=true alertingUIOptimizeReducer=true useSessionStorageForRedirection=true dashboardSceneForViewers=true kubernetesPlaylists=true influxdbBackendMigration=true dashboardSceneSolo=true alertingNotificationsStepMode=true externalCorePlugins=true publicDashboardsScene=true failWrongDSUID=true unifiedRequestLog=true cloudWatchRoundUpEndTime=true pinNavItems=true lokiStructuredMetadata=true promQLScope=true dataplaneFrontendFallback=true reportingUseRawTimeRange=true lokiQuerySplitting=true recordedQueriesMulti=true lokiQueryHints=true ssoSettingsApi=true alertingApiServer=true tlsMemcached=true groupToNestedTableTransformation=true logsInfiniteScrolling=true preinstallAutoUpdate=true panelMonitoring=true addFieldFromCalculationStatFunctions=true alertingRuleRecoverDeleted=true newPDFRendering=true
18:36:02 grafana | logger=sqlstore t=2025-06-16T18:32:43.677924492Z level=info msg="Connecting to DB" dbtype=sqlite3
18:36:02 grafana | logger=sqlstore t=2025-06-16T18:32:43.677937462Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.679576545Z level=info msg="Locking database"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.679590235Z level=info msg="Starting DB migrations"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.680268131Z level=info msg="Executing migration" id="create migration_log table"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.681151958Z level=info msg="Migration successfully executed" id="create migration_log table" duration=883.147µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.684647167Z level=info msg="Executing migration" id="create user table"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.685272852Z level=info msg="Migration successfully executed" id="create user table" duration=625.065µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.689635007Z level=info msg="Executing migration" id="add unique index user.login"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.690333093Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=697.876µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.696362593Z level=info msg="Executing migration" id="add unique index user.email"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.697610583Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.24736ms
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.701475995Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.702581094Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.104179ms
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.706213334Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.707272443Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.058779ms
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.713667866Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.716344558Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.676232ms
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.719413953Z level=info msg="Executing migration" id="create user table v2"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.720666023Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.2513ms
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.724337553Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.7251776Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=847.297µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.728642348Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.729182843Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=540.145µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.732261518Z level=info msg="Executing migration" id="copy data_source v1 to v2"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.732572681Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=310.313µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.738335178Z level=info msg="Executing migration" id="Drop old table user_v1"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.738815962Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=480.574µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.741766056Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.742623233Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=856.357µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.745597317Z level=info msg="Executing migration" id="Update user table charset"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.745649858Z level=info msg="Migration successfully executed" id="Update user table charset" duration=52.841µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.748679563Z level=info msg="Executing migration" id="Add last_seen_at column to user"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.749480789Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=800.656µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.755969113Z level=info msg="Executing migration" id="Add missing user data"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.756334126Z level=info msg="Migration successfully executed" id="Add missing user data" duration=368.203µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.782964235Z level=info msg="Executing migration" id="Add is_disabled column to user"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.78609713Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=3.132215ms
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.790244194Z level=info msg="Executing migration" id="Add index user.login/user.email"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.79098551Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=740.786µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.795692009Z level=info msg="Executing migration" id="Add is_service_account column to user"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.796852179Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.161859ms
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.800938372Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.810681572Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.74361ms
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.814436544Z level=info msg="Executing migration" id="Add uid column to user"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.81531164Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=875.046µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.818961051Z level=info msg="Executing migration" id="Update uid column values for users"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.819122482Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=158.561µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.823380887Z level=info msg="Executing migration" id="Add unique index user_uid"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.824064513Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=686.136µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.829498827Z level=info msg="Executing migration" id="Add is_provisioned column to user"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.831614815Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=2.114438ms
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.835521077Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.836264112Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=742.285µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.841490385Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.842004849Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=514.174µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.845516879Z level=info msg="Executing migration" id="update login and email fields to lowercase"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.845975222Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=457.054µs
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.84927642Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.849645743Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=369.353µs
grafana | logger=migrator t=2025-06-16T18:32:43.85428399Z level=info msg="Executing migration" id="create temp user table v1-7" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.855541091Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.256961ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.860463382Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.86155462Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.091328ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.865362772Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.866068808Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=705.676µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.869740677Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.870484014Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=743.217µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.875314243Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.876265091Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=949.688µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.880770248Z level=info msg="Executing migration" id="Update temp_user table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.880812919Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=40.281µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.885230125Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.886640076Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.409551ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.890404418Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.891448266Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.045168ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.896223525Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.896701158Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=477.673µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.901345497Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.902295264Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=949.417µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.907119495Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.911944764Z level=info msg="Migration successfully executed" id="Rename table temp_user to 
temp_user_tmp_qwerty - v1" duration=4.824559ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.915959577Z level=info msg="Executing migration" id="create temp_user v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.917041216Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.064559ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.921692854Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.922420891Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=727.817µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.925952659Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.926662875Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=709.676µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.93092109Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.931644526Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=723.286µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.935138125Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.93582149Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=683.155µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.939725872Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.940092765Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=366.643µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.944884455Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.946133695Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=1.25279ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.951657121Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.952037424Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=380.543µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.955447271Z level=info msg="Executing migration" id="create star table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.956074857Z level=info msg="Migration successfully executed" id="create star table" duration=627.186µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.959604355Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.960347552Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=745.227µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.964732648Z level=info msg="Executing migration" id="Add column dashboard_uid in star" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.966851135Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" 
duration=2.116867ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.970750267Z level=info msg="Executing migration" id="Add column org_id in star" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.972955216Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=2.204469ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.976479494Z level=info msg="Executing migration" id="Add column updated in star" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.977845215Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.365421ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.981743148Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.982695266Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=951.698µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.987251523Z level=info msg="Executing migration" id="create org table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.987972039Z level=info msg="Migration successfully executed" id="create org table v1" duration=720.236µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.991624729Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.992316775Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=689.336µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.998532735Z level=info msg="Executing migration" id="create org_user table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:43.999745945Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.2134ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.003608787Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.004971329Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.361602ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.009566666Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.010632375Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.065879ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.014263904Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.015093831Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=828.957µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.018582979Z level=info msg="Executing migration" id="Update org table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.018612299Z level=info msg="Migration successfully executed" id="Update org table charset" duration=30.1µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.022025647Z level=info msg="Executing migration" id="Update org_user table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.022049577Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=24.49µs 18:36:02 grafana | 
logger=migrator t=2025-06-16T18:32:44.026870966Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.027069208Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=198.192µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.030564106Z level=info msg="Executing migration" id="create dashboard table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.031732006Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.16721ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.035410145Z level=info msg="Executing migration" id="add index dashboard.account_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.03709308Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.683295ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.045435727Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.046459725Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.022918ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.050573698Z level=info msg="Executing migration" id="create dashboard_tag table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.051378065Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=804.217µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.054909113Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.05573225Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=824.517µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.059258708Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.059960925Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=701.547µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.064419261Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.069870015Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.449374ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.073527285Z level=info msg="Executing migration" id="create dashboard v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.074296531Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=766.576µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.078849397Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.079772025Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=921.708µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.083723047Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.084633425Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" 
duration=910.548µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.088017902Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.088423315Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=404.293µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.091659192Z level=info msg="Executing migration" id="drop table dashboard_v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.092391357Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=731.595µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.096470181Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.096486491Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=17.081µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.099619946Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.101425281Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.804595ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.104767998Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.108165075Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.414407ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.144307738Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.148600053Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=4.291515ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.153197871Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.153936316Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=738.225µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.157401625Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.160026405Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.62283ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.163437793Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.164590313Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.15141ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.169454662Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.170593981Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.138629ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.174777765Z level=info msg="Executing migration" id="Update dashboard table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.174801985Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=24.78µs 18:36:02 grafana | 
logger=migrator t=2025-06-16T18:32:44.177982231Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.178006191Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=24.83µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.182011834Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.18391477Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.902466ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.186865974Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.188805069Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.936185ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.191918784Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.19387021Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.953196ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.199205673Z level=info msg="Executing migration" id="Add column uid in dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.201681533Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.47553ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.20492734Z level=info msg="Executing migration" id="Update uid column values in dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.205129792Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=201.942µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.208327147Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.209123884Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=796.147µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.213868812Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.214596928Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=727.516µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.218057037Z level=info msg="Executing migration" id="Update dashboard title length" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.218081647Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=24.87µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.222547933Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.224354647Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.805054ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.228989705Z level=info msg="Executing migration" id="create dashboard_provisioning" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.229751151Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=761.536µs 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:44.23332857Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.238595383Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.266203ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.242085091Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.242801908Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=716.506µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.246331696Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.247108562Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=776.276µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.252476305Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.25419733Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.719845ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.258240443Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.258791107Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=549.924µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.262398296Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.26288909Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=490.064µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.26787636Z level=info msg="Executing migration" id="Add check_sum column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.271770312Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.897082ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.275094929Z level=info msg="Executing migration" id="Add index for dashboard_title" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.2764051Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.309721ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.282834151Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.283078353Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=243.652µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.286064697Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.286494691Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=429.874µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.29006545Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.292315158Z level=info 
msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=2.252628ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.297918914Z level=info msg="Executing migration" id="Add isPublic for dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.300436114Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.51659ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.305175383Z level=info msg="Executing migration" id="Add deleted for dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.306849246Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=1.673224ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.309943962Z level=info msg="Executing migration" id="Add index for deleted" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.310673257Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=729.005µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.314006454Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.316346633Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.339519ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.3209598Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.32337633Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.41581ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.326681496Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.327186451Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=500.985µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.330598128Z level=info msg="Executing migration" id="Add apiVersion for dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.333036128Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.43652ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.336323345Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.337443814Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=1.119529ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.34193321Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.342508895Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=574.885µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.345995163Z level=info msg="Executing migration" id="create data_source table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.347536496Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.541543ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.352624237Z level=info msg="Executing migration" id="add index data_source.account_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.354151649Z level=info 
msg="Migration successfully executed" id="add index data_source.account_id" duration=1.592992ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.359280671Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.36034405Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.063119ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.363794318Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.364629845Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=835.377µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.367992622Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.368857109Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=863.807µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.373447716Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.380324092Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.875436ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.383975481Z level=info msg="Executing migration" id="create data_source table v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.384920339Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=944.838µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.38993264Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.390853037Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=917.337µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.394243095Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.397273699Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=3.025344ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.402043487Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.403069286Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.024979ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.407737734Z level=info msg="Executing migration" id="Add column with_credentials" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.411072741Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.334327ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.415618608Z level=info msg="Executing migration" id="Add secure json data column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.418147558Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.52783ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.421254013Z level=info msg="Executing migration" id="Update data_source table charset" 18:36:02 grafana | 
logger=migrator t=2025-06-16T18:32:44.421352064Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=98.341µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.426335185Z level=info msg="Executing migration" id="Update initial version to 1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.426694398Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=358.533µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.430941882Z level=info msg="Executing migration" id="Add read_only data column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.43570979Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.768718ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.439622222Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.439972935Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=350.063µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.443170291Z level=info msg="Executing migration" id="Update json_data with nulls" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.443454564Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=283.683µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.448149231Z level=info msg="Executing migration" id="Add uid column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.450698932Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.548981ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.454979417Z level=info msg="Executing migration" id="Update uid value" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.455261369Z level=info msg="Migration successfully executed" id="Update uid value" duration=281.682µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.458588676Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.459466803Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=877.847µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.496865777Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.499120845Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=2.255218ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.503665852Z level=info msg="Executing migration" id="Add is_prunable column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.508221559Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=4.558627ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.511704927Z level=info msg="Executing migration" id="Add api_version column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.514789852Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.084005ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.521713918Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.521803159Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" 
duration=90.471µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.526699428Z level=info msg="Executing migration" id="create api_key table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.528241411Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.541173ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.53183142Z level=info msg="Executing migration" id="add index api_key.account_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.532512455Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=681.025µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.535707352Z level=info msg="Executing migration" id="add index api_key.key" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.536342907Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=636.895µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.541425418Z level=info msg="Executing migration" id="add index api_key.account_id_name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.54289626Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.472942ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.546792012Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.54784069Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.048777ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.552427547Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.553375194Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=947.387µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.558132824Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.558780118Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=646.574µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.561612611Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.570940287Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=9.326986ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.577008866Z level=info msg="Executing migration" id="create api_key table v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.578164246Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.15299ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.582782203Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.584228714Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.446531ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.587417801Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.588279617Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=861.666µs 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:44.593275308Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.594290797Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.014349ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.597916026Z level=info msg="Executing migration" id="copy api_key v1 to v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.598618112Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=701.506µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.602948417Z level=info msg="Executing migration" id="Drop old table api_key_v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.603768963Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=820.246µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.608474432Z level=info msg="Executing migration" id="Update api_key table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.608585512Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=110.32µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.61200985Z level=info msg="Executing migration" id="Add expires to api_key table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.614713492Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.702912ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.618197731Z level=info msg="Executing migration" id="Add service account foreign key" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.620806112Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.607662ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.62682221Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.627104372Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=281.552µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.631031534Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.634689703Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.657339ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.637809969Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.640523242Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.712832ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.643642607Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.644452183Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=808.906µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.649478194Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.65018295Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=701.466µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.654564805Z level=info 
msg="Executing migration" id="create dashboard_snapshot table v5 #2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.656159348Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.593653ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.662313278Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.66392211Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.608302ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.667713991Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.669223944Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.509863ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.672775843Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.673658309Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=882.396µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.676907537Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.676962387Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=54.98µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.682828535Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.682981846Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=154.871µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.68841103Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.691541285Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.129665ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.694758191Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.697616024Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.857293ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.702913237Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.702986958Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=74.151µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.706273375Z level=info msg="Executing migration" id="create quota table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.707131801Z level=info msg="Migration successfully executed" id="create quota table v1" duration=857.436µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.71067289Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.71184271Z level=info msg="Migration 
successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.16926ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.715139946Z level=info msg="Executing migration" id="Update quota table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.715208457Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=69.091µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.719645143Z level=info msg="Executing migration" id="create plugin_setting table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.72054435Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=898.687µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.723712416Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.724603863Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=890.977µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.728107371Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.731412259Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.304108ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.738807038Z level=info msg="Executing migration" id="Update plugin_setting table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.738918409Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=108.301µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.742065244Z level=info msg="Executing migration" id="update NULL org_id to 1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.742480888Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=415.214µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.745655874Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.759902979Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=14.247405ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.765312123Z level=info msg="Executing migration" id="create session table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.765949058Z level=info msg="Migration successfully executed" id="create session table" duration=636.615µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.769095653Z level=info msg="Executing migration" id="Drop old table playlist table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.769231675Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=118.452µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.772159229Z level=info msg="Executing migration" id="Drop old table playlist_item table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.77231772Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=157.841µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.780143433Z level=info msg="Executing migration" id="create playlist table v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.781249992Z level=info msg="Migration successfully executed" id="create 
playlist table v2" duration=1.106399ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.785020233Z level=info msg="Executing migration" id="create playlist item table v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.78724301Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=2.221397ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.791391494Z level=info msg="Executing migration" id="Update playlist table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.791582076Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=193.992µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.795128265Z level=info msg="Executing migration" id="Update playlist_item table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.795267006Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=137.751µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.800291447Z level=info msg="Executing migration" id="Add playlist column created_at" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.803916506Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.624349ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.807924868Z level=info msg="Executing migration" id="Add playlist column updated_at" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.811384687Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.456469ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.850695385Z level=info msg="Executing migration" id="drop preferences table v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.851242629Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=549.714µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.857782933Z level=info msg="Executing migration" id="drop preferences table v3" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.857980354Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=196.561µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.861403531Z level=info msg="Executing migration" id="create preferences table v3" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.86244866Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.044849ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.865571125Z level=info msg="Executing migration" id="Update preferences table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.865650196Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=79.311µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.868802152Z level=info msg="Executing migration" id="Add column team_id in preferences" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.872223919Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.421227ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.878175247Z level=info msg="Executing migration" id="Update team_id column values in preferences" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.878509411Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=334.154µs 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:44.882839596Z level=info msg="Executing migration" id="Add column week_start in preferences" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.886093272Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.252976ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.889212307Z level=info msg="Executing migration" id="Add column preferences.json_data" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.891580417Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.36597ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.896038953Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.896102053Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=63.92µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.898925196Z level=info msg="Executing migration" id="Add preferences index org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.899894134Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=968.868µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.904748073Z level=info msg="Executing migration" id="Add preferences index user_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.907813679Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=3.064776ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.91299499Z level=info msg="Executing migration" id="create alert table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.914296781Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.301711ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.917723199Z level=info msg="Executing migration" id="add index alert org_id & id " 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.918884658Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.160989ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.922137535Z level=info msg="Executing migration" id="add index alert state" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.923172684Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.034958ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.927908222Z level=info msg="Executing migration" id="add index alert dashboard_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.929403624Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.495153ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.933143914Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.9338712Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=726.876µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.937220377Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.938128464Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=907.307µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.942738761Z level=info msg="Executing migration" id="drop 
index UQE_alert_rule_tag_alert_id_tag_id - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.944072632Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.337431ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.947903273Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.960237183Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=12.33434ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.964264726Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.964948781Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=681.735µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.969295446Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.970255454Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=959.168µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.973755653Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.974133946Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=377.413µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.978002648Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.978670333Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=666.745µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.982927777Z level=info msg="Executing migration" id="create alert_notification table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.984276078Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.346461ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.988084589Z level=info msg="Executing migration" id="Add column is_default" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.993034069Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.95001ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:44.997053482Z level=info msg="Executing migration" id="Add column frequency" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.00044708Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.393338ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.004468512Z level=info msg="Executing migration" id="Add column send_reminder" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.007267705Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.797863ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.01160168Z level=info msg="Executing migration" id="Add column disable_resolve_message" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.015368819Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.766439ms 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:45.018630767Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.019768446Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.16517ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.024795946Z level=info msg="Executing migration" id="Update alert table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.024869027Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=73.211µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.028271625Z level=info msg="Executing migration" id="Update alert_notification table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.028343785Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=72.39µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.031040807Z level=info msg="Executing migration" id="create notification_journal table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.032374488Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.332861ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.03880831Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.039866479Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.070748ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.044363545Z level=info msg="Executing migration" id="drop alert_notification_journal" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.045350213Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=986.108µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.048984032Z level=info msg="Executing migration" id="create alert_notification_state table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.050517314Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.532822ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.055066742Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.056520113Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.453181ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.060163932Z level=info msg="Executing migration" id="Add for to alert table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.064296766Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.132514ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.06848909Z level=info msg="Executing migration" id="Add column uid in alert_notification" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.072352921Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.863231ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.07714368Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.077633534Z level=info msg="Migration 
successfully executed" id="Update uid column values in alert_notification" duration=489.334µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.081608397Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.08330851Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.698663ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.087921397Z level=info msg="Executing migration" id="Remove unique index org_id_name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.088812435Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=890.438µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.094006117Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.097985989Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.979192ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.101442707Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.101587418Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=145.691µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.105161788Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.10676538Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.603263ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.112398036Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.113748397Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.350451ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.119690605Z level=info msg="Executing migration" id="Drop old annotation table v4" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.119851376Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=159.951µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.123492936Z level=info msg="Executing migration" id="create annotation table v5" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.125126069Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.632543ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.13026327Z level=info msg="Executing migration" id="add index annotation 0 v3" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.131544541Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.282021ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.135182301Z level=info msg="Executing migration" id="add index annotation 1 v3" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.136258479Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.075569ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.139662157Z level=info msg="Executing migration" id="add index annotation 2 v3" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.140558704Z 
level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=896.207µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.144603147Z level=info msg="Executing migration" id="add index annotation 3 v3" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.14623501Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.630233ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.150080051Z level=info msg="Executing migration" id="add index annotation 4 v3" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.150949358Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=868.857µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.154484147Z level=info msg="Executing migration" id="Update annotation table charset" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.154506447Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=22.55µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.158870012Z level=info msg="Executing migration" id="Add column region_id to annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.165728928Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.858436ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.211030235Z level=info msg="Executing migration" id="Drop category_id index" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.212267305Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.23705ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.216160087Z level=info msg="Executing migration" id="Add column tags to annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.223351215Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=7.192108ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.227877321Z level=info msg="Executing migration" id="Create annotation_tag table v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.228456656Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=578.855µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.231798003Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.233465016Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.666513ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.238599488Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.240234191Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.633103ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.245401123Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.259518888Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.113574ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.263297837Z level=info msg="Executing migration" id="Create annotation_tag table v3" 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:45.263858253Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=559.916µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.269483608Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.270462856Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=978.918µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.274763481Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.275312985Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=548.564µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.278945715Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.279532059Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=585.864µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.283100209Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.283383621Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=282.652µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.287516244Z level=info msg="Executing migration" id="Add created time to annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.296405136Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=8.894872ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.302036272Z level=info msg="Executing migration" id="Add updated time to annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.306324356Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.287224ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.310332049Z level=info msg="Executing migration" id="Add index for created in annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.311355008Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.019209ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.314770155Z level=info msg="Executing migration" id="Add index for updated in annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.315756533Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=986.048µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.320541792Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.320981166Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=438.725µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.326240518Z level=info msg="Executing migration" id="Add epoch_end column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.331321409Z level=info msg="Migration successfully executed" id="Add epoch_end column" 
duration=5.080801ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.334583505Z level=info msg="Executing migration" id="Add index for epoch_end" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.335544283Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=959.898µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.340243231Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.340480453Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=236.272µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.343889011Z level=info msg="Executing migration" id="Move region to single row" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.344520395Z level=info msg="Migration successfully executed" id="Move region to single row" duration=629.254µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.348438967Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.35001859Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.584023ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.35363448Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.35491421Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.27953ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.35861369Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.360141403Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.527143ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.365300214Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.366142071Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=845.907µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.368928854Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.370083133Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.152949ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.37339931Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.374748861Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.349081ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.381547176Z level=info msg="Executing migration" id="Increase tags column to length 4096" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.381563476Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=19.31µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.384750702Z level=info msg="Executing 
migration" id="Increase prev_state column to length 40 not null" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.384770982Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=18.75µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.387739316Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.387756896Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=18.3µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.392638946Z level=info msg="Executing migration" id="create test_data table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.394053387Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.413331ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.397710636Z level=info msg="Executing migration" id="create dashboard_version table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.398547984Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=836.938µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.403649805Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.404631753Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=981.678µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.410212348Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.411114545Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=901.877µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.414708285Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.414887296Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=179.021µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.418294674Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.418654356Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=353.362µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.421794492Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.421817773Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=24.351µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.427219526Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.435754945Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=8.534659ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.439215723Z level=info msg="Executing migration" id="create team table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.439757247Z level=info msg="Migration successfully 
executed" id="create team table" duration=540.984µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.442844552Z level=info msg="Executing migration" id="add index team.org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.443470687Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=626.015µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.44754163Z level=info msg="Executing migration" id="add unique index team_org_id_name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.449016142Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.473202ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.452630362Z level=info msg="Executing migration" id="Add column uid in team" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.457487741Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.856729ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.461152701Z level=info msg="Executing migration" id="Update uid column values in team" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.461416033Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=263.972µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.465979039Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.466968888Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=990.129µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.470494736Z level=info msg="Executing migration" id="Add column external_uid in team" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.475124084Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.628878ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.47954226Z level=info msg="Executing migration" id="Add column is_provisioned in team" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.48454254Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.99728ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.489173458Z level=info msg="Executing migration" id="create team member table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.490466178Z level=info msg="Migration successfully executed" id="create team member table" duration=1.27107ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.494077907Z level=info msg="Executing migration" id="add index team_member.org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.495119146Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.040579ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.498415432Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.499511131Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.095059ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.503849956Z level=info msg="Executing migration" id="add index team_member.team_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.504964146Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.11344ms 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:45.509396061Z level=info msg="Executing migration" id="Add column email to team table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.517421226Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.023465ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.52162596Z level=info msg="Executing migration" id="Add column external to team_member table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.528642417Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=7.017417ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.534110772Z level=info msg="Executing migration" id="Add column permission to team_member table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.538409476Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.297214ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.541863304Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.543081854Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.21816ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.546390951Z level=info msg="Executing migration" id="create dashboard acl table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.547479189Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.088488ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.583930105Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.586139323Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=2.211648ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.590527729Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.591185994Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=657.655µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.594535951Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.595406439Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=869.248µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.598511694Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.599609603Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.096889ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.603736266Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.604658313Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=921.077µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.609734564Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.610627332Z level=info msg="Migration successfully executed" id="add index 
dashboard_acl_org_id_role" duration=892.288µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.614305721Z level=info msg="Executing migration" id="add index dashboard_permission" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.615999164Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.692633ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.620944045Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.621543959Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=600.504µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.62523086Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.625645803Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=416.773µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.630292711Z level=info msg="Executing migration" id="create tag table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.631314629Z level=info msg="Migration successfully executed" id="create tag table" duration=1.021497ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.635021639Z level=info msg="Executing migration" id="add index tag.key_value" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.636021777Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.000248ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.639556635Z level=info msg="Executing migration" id="create login attempt table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.640362472Z level=info msg="Migration successfully executed" id="create login attempt table" duration=805.887µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.644664847Z level=info msg="Executing migration" id="add index login_attempt.username" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.645635215Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=970.218µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.649120463Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.650032961Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=912.048µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.653893791Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.667767704Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=13.874173ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.672237601Z level=info msg="Executing migration" id="create login_attempt v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.673071217Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=833.626µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.676514426Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.677827036Z level=info msg="Migration successfully executed" 
id="create index IDX_login_attempt_username - v2" duration=1.31249ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.681308764Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.681656077Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=346.883µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.68705236Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.687810907Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=755.517µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.691116694Z level=info msg="Executing migration" id="create user auth table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.692114982Z level=info msg="Migration successfully executed" id="create user auth table" duration=997.679µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.69571286Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.696733899Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.021599ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.701041683Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.701058174Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=18.78µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.706080925Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.713540975Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.45823ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.717368426Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.72272135Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.352404ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.727123725Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.732986372Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.861887ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.73765238Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.743456577Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.804037ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.747057926Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.747996574Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=938.448µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.752227919Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.756104479Z level=info msg="Migration successfully executed" id="Add OAuth ID 
token to user_auth" duration=3.87582ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.760877129Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.767042818Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=6.164979ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.770488187Z level=info msg="Executing migration" id="create server_lock table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.771246682Z level=info msg="Migration successfully executed" id="create server_lock table" duration=760.915µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.775817259Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.776835888Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.018209ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.780434007Z level=info msg="Executing migration" id="create user auth token table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.78205485Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.627713ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.786270474Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.788520173Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.250579ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.79322892Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.794300739Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.071209ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.798286902Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.7993564Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.069818ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.802926479Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.810046077Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=7.119178ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.814584303Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.815527971Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=943.668µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.818856888Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.824360932Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=5.502534ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.82766201Z level=info msg="Executing migration" id="create cache_data table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.828668847Z level=info msg="Migration successfully executed" id="create 
cache_data table" duration=1.006877ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.832995122Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.834080611Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.085429ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.83760232Z level=info msg="Executing migration" id="create short_url table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.838518867Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=911.367µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.842081066Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.843399777Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.357111ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.847822863Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.847841373Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=19.58µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.851340201Z level=info msg="Executing migration" id="delete alert_definition table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.851446443Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=81.34µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.85491717Z level=info msg="Executing migration" id="recreate alert_definition table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.855855918Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=938.218µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.862534771Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.86357616Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.040989ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.869267096Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.87095945Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.691474ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.874554939Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.874576519Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=22.37µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.87828461Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.879380608Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.090368ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.883474701Z level=info msg="Executing 
migration" id="drop index in alert_definition on org_id and uid columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.884874762Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.398631ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.889185297Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.890955232Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.770215ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.896720409Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.898271621Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.551432ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.901849511Z level=info msg="Executing migration" id="Add column paused in alert_definition" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.90799051Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.140199ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.932872132Z level=info msg="Executing migration" id="drop alert_definition table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.93513001Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=2.256528ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.940993857Z level=info msg="Executing migration" id="delete alert_definition_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.94130287Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=311.173µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.944636277Z level=info msg="Executing migration" id="recreate alert_definition_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.94621903Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.581912ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.951244771Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.95232756Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.082569ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.956519354Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.957912845Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.392371ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.961812476Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.961843566Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" 
duration=83.061µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.967486572Z level=info msg="Executing migration" id="drop alert_definition_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.96853253Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.045228ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.971503674Z level=info msg="Executing migration" id="create alert_instance table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.972624363Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.120159ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.975542967Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.97721923Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.671283ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.983631513Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.984678331Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.048888ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.987912447Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.996631698Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.717661ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:45.999751853Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.000418079Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=666.016µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.006680219Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.008123471Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.443112ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.011501219Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.036880934Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.376295ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.041793973Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.071769846Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=29.985013ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.077631043Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.07846371Z level=info msg="Migration successfully executed" id="add 
index rule_org_id, rule_uid, current_state on alert_instance" duration=832.977µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.084361958Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.086099702Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.737084ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.095790121Z level=info msg="Executing migration" id="add current_reason column related to current_state" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.104609702Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.78548ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.110139797Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.115188328Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.047481ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.118536564Z level=info msg="Executing migration" id="create alert_rule table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.119619843Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.083039ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.1254579Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.1266039Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.14616ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.132943872Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.134641615Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.696873ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.144700737Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.146312439Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.610032ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.150412063Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.150531434Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=120.161µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.154287884Z level=info msg="Executing migration" id="add column for to alert_rule" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.160539244Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.25049ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.164757618Z level=info msg="Executing migration" id="add column annotations to alert_rule" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.171416713Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.650694ms 18:36:02 grafana 
| logger=migrator t=2025-06-16T18:32:46.177866605Z level=info msg="Executing migration" id="add column labels to alert_rule" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.184533758Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.666283ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.187840896Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.188621452Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=780.186µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.191904219Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.192867476Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=962.877µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.19958475Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.209221638Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.636348ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.212333463Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.216768999Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.434856ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.224905475Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.227071613Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.164988ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.230962385Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.239677795Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=8.71575ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.244574924Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.250901506Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.325942ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.285507726Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.285687608Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=180.412µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.291825617Z level=info msg="Executing migration" id="create alert_rule_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.293641591Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.820914ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.298447761Z level=info msg="Executing migration" id="add index in alert_rule_version 
table on rule_org_id, rule_uid and version columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.300200745Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.752234ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.30701915Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.308268531Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.239551ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.314560521Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.314667362Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=107.871µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.319572572Z level=info msg="Executing migration" id="add column for to alert_rule_version" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.326650889Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.078517ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.3341789Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.340772743Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.593193ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.348021922Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.355302671Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.274209ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.36003297Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.36641426Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.3806ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.369655287Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.376315031Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.659114ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.381946936Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.382039707Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=92.541µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.388931653Z level=info msg="Executing migration" id=create_alert_configuration_table 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.390572557Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.640004ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.395636877Z level=info msg="Executing 
migration" id="Add column default in alert_configuration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.404683971Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.047274ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.409172497Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.409328998Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=153.541µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.416144553Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.426126604Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=9.986951ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.429876515Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.430684221Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=807.446µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.43538457Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.441810531Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.425581ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.446121326Z level=info msg="Executing migration" id=create_ngalert_configuration_table 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.447072474Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=950.098µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.454266002Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.456664571Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=2.401159ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.461223818Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.468077334Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.852816ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.478061465Z level=info msg="Executing migration" id="create provenance_type table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.479652117Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.589742ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.486318142Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.487482191Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.163789ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.493137597Z level=info msg="Executing migration" 
id="create alert_image table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.494658649Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.520312ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.500302615Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.501441854Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.138849ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.506453604Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.506669786Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=217.622µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.51208435Z level=info msg="Executing migration" id=create_alert_configuration_history_table 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.513646342Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.564732ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.518063989Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.519223847Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.159268ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.522575825Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.523082779Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.527812888Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.528653744Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=846.266µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.537837988Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.539704713Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.865435ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.543856467Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.551957963Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.102066ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.555907555Z level=info msg="Executing migration" id="create library_element table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.556725231Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=817.366µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.561919703Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:46.56408448Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=2.162187ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.568000762Z level=info msg="Executing migration" id="create library_element_connection table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.569847998Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.846266ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.573492767Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.574665666Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.172319ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.580531304Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.581664553Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.132509ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.591283321Z level=info msg="Executing migration" id="increase max description length to 2048" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.591479262Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=195.121µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.596933747Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.597084568Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=151.301µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.647111823Z level=info msg="Executing migration" id="add library_element folder uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.655594301Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=8.479808ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.659894266Z level=info msg="Executing migration" id="populate library_element folder_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.660254719Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=357.553µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.664994758Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.665917495Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=922.017µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.674848908Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.675476992Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=627.004µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.682467579Z level=info msg="Executing migration" id="create data_keys table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.684174763Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.707484ms 18:36:02 grafana | 
logger=migrator t=2025-06-16T18:32:46.691180369Z level=info msg="Executing migration" id="create secrets table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.692650131Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.469472ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.700294033Z level=info msg="Executing migration" id="rename data_keys name column to id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.740293086Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=39.996023ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.749880584Z level=info msg="Executing migration" id="add name column into data_keys" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.758104441Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.223357ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.762907161Z level=info msg="Executing migration" id="copy data_keys id column values into name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.763165742Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=257.711µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.766789691Z level=info msg="Executing migration" id="rename data_keys name column to label" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.800838057Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.045936ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.813305929Z level=info msg="Executing migration" id="rename data_keys id column back to name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.84691049Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=33.642403ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.854961185Z level=info msg="Executing migration" id="create kv_store table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.856619728Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.661903ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.865960564Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.867485486Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.523802ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.888527697Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.888871139Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=343.232µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.89750476Z level=info msg="Executing migration" id="create permission table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.899012132Z level=info msg="Migration successfully executed" id="create permission table" duration=1.507522ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.907083946Z level=info msg="Executing migration" id="add unique index permission.role_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.90869698Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.613884ms 18:36:02 grafana | 
logger=migrator t=2025-06-16T18:32:46.91247939Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.914122164Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.642724ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.922181409Z level=info msg="Executing migration" id="create role table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.92357315Z level=info msg="Migration successfully executed" id="create role table" duration=1.391421ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.929051275Z level=info msg="Executing migration" id="add column display_name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.93703226Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.979055ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.942840916Z level=info msg="Executing migration" id="add column group_name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.949203808Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.363962ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.986153107Z level=info msg="Executing migration" id="add index role.org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.988595107Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=2.44549ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.995893596Z level=info msg="Executing migration" id="add unique index role_org_id_name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.996936944Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.043348ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:46.999907558Z level=info msg="Executing migration" id="add index role_org_id_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.001894074Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.980156ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.015368004Z level=info msg="Executing migration" id="create team role table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.017032297Z level=info msg="Migration successfully executed" id="create team role table" duration=1.646783ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.023505649Z level=info msg="Executing migration" id="add index team_role.org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.024690418Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.184799ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.028022716Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.029108275Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.084759ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.036819507Z level=info msg="Executing migration" id="add index team_role.team_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.038023637Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.21059ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.047517154Z level=info msg="Executing migration" id="create user role table" 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:47.049160187Z level=info msg="Migration successfully executed" id="create user role table" duration=1.642893ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.058084728Z level=info msg="Executing migration" id="add index user_role.org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.059148967Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.063809ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.074062418Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.075821692Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.758514ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.082027212Z level=info msg="Executing migration" id="add index user_role.user_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.083116571Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.086039ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.09044229Z level=info msg="Executing migration" id="create builtin role table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.091816181Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.372981ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.103336615Z level=info msg="Executing migration" id="add index builtin_role.role_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.10526845Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.930865ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.109944388Z level=info msg="Executing migration" id="add index builtin_role.name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.111261808Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.31792ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.116144219Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.127734392Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=11.590803ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.137850503Z level=info msg="Executing migration" id="add index builtin_role.org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.13978188Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.934687ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.14363052Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.14481313Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.18175ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.151153412Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.153318048Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=2.163986ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.159365588Z level=info msg="Executing migration" id="add unique index role.uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.160442926Z level=info 
msg="Migration successfully executed" id="add unique index role.uid" duration=1.077238ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.165248505Z level=info msg="Executing migration" id="create seed assignment table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.166094192Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=844.637µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.171438425Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.172483194Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.044319ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.184029757Z level=info msg="Executing migration" id="add column hidden to role table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.195585281Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.554154ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.198529225Z level=info msg="Executing migration" id="permission kind migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.205235649Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.705344ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.211380969Z level=info msg="Executing migration" id="permission attribute migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.219449664Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.064286ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.225702604Z level=info msg="Executing migration" id="permission identifier migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.240485714Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=14.78267ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.245473214Z level=info msg="Executing migration" id="add permission identifier index" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.246397122Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=923.868µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.250569446Z level=info msg="Executing migration" id="add permission action scope role_id index" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.251903027Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.332811ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.263113267Z level=info msg="Executing migration" id="remove permission role_id action scope index" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.264831471Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.716944ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.271489405Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.280384707Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.895022ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.284314909Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:47.285500628Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.181759ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.293884586Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.295949452Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=2.068206ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.301799209Z level=info msg="Executing migration" id="create query_history table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.303375663Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.579634ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.366288791Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.368505389Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.218328ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.378023896Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.378053157Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=30.63µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.383971354Z level=info msg="Executing migration" id="create query_history_details table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.385072203Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.129319ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.389740591Z level=info msg="Executing migration" id="rbac disabled migrator" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.389826492Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=86.601µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.398961566Z level=info msg="Executing migration" id="teams permissions migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.399685811Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=723.665µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.404309749Z level=info msg="Executing migration" id="dashboard permissions" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.405253766Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=944.977µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.410549389Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.411332246Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=782.727µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.415672291Z level=info msg="Executing migration" id="drop managed folder create actions" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.415874763Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=202.392µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.424981835Z level=info msg="Executing migration" 
id="alerting notification permissions" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.425722552Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=740.547µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.429553873Z level=info msg="Executing migration" id="create query_history_star table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.430918594Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.364221ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.437219815Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.438618086Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.397791ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.451984064Z level=info msg="Executing migration" id="add column org_id in query_history_star" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.461294849Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.307845ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.468182615Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.468196995Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=14.87µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.472656031Z level=info msg="Executing migration" id="create correlation table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.474158853Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.494092ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.480264163Z level=info msg="Executing migration" id="add index correlations.uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.481571564Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.306641ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.488905363Z level=info msg="Executing migration" id="add index correlations.source_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.490020572Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.114749ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.496414073Z level=info msg="Executing migration" id="add correlation config column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.507487243Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.07295ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.511258604Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.512516604Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.25856ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.51944731Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.521182474Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.808134ms 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:47.527512916Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.548402895Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=20.896659ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.551489569Z level=info msg="Executing migration" id="create correlation v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.552436406Z level=info msg="Migration successfully executed" id="create correlation v2" duration=946.647µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.559679176Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.561705352Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.025345ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.565233841Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.566986655Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.752414ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.570756594Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.571828093Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.064669ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.578005754Z level=info msg="Executing migration" id="copy correlation v1 to v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.578578308Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=572.094µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.582650411Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.583923622Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.26506ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.591113709Z level=info msg="Executing migration" id="add provisioning column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.599386997Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.272868ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.608340128Z level=info msg="Executing migration" id="add type column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.621382915Z level=info msg="Migration successfully executed" id="add type column" duration=13.043607ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.625659149Z level=info msg="Executing migration" id="create entity_events table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.626387235Z level=info msg="Migration successfully executed" id="create entity_events table" duration=732.756µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.631026103Z level=info msg="Executing migration" id="create dashboard public config v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.63201599Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=989.767µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.636576428Z level=info msg="Executing migration" 
id="drop index UQE_dashboard_public_config_uid - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.637014021Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.640388538Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.640812122Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.64430176Z level=info msg="Executing migration" id="Drop old dashboard public config table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.645027086Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=725.076µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.652219424Z level=info msg="Executing migration" id="recreate dashboard public config v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.653424104Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.20436ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.656835111Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.658837547Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.002286ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.662480637Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.663658206Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.177319ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.668866819Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.670431071Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.570682ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.673884499Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.675610902Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.726253ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.679861757Z level=info msg="Executing migration" id="Drop public config table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.680854285Z level=info msg="Migration successfully executed" id="Drop public config table" duration=992.198µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.684654786Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.685997927Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.342361ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.690255421Z level=info msg="Executing migration" id="create index 
UQE_dashboard_public_config_uid - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.691432211Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.17643ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.727890846Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.730772409Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.880073ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.735700108Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.736866828Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.16632ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.743860125Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.765975984Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=22.115749ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.769184579Z level=info msg="Executing migration" id="add annotations_enabled column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.775992675Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.807656ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.780220429Z level=info msg="Executing migration" id="add time_selection_enabled column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.788576266Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.355957ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.79397013Z level=info msg="Executing migration" id="delete orphaned public dashboards" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.794817117Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=847.647µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.798535497Z level=info msg="Executing migration" id="add share column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.808809121Z level=info msg="Migration successfully executed" id="add share column" duration=10.273384ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.811833795Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.811963556Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=131.321µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.814213114Z level=info msg="Executing migration" id="create file table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.81494736Z level=info msg="Migration successfully executed" id="create file table" duration=734.126µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.818998803Z level=info msg="Executing migration" id="file table idx: path natural pk" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.820333803Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" 
duration=1.33441ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.825004521Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.82731736Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.304079ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.830729087Z level=info msg="Executing migration" id="create file_meta table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.831720866Z level=info msg="Migration successfully executed" id="create file_meta table" duration=992.389µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.836078541Z level=info msg="Executing migration" id="file table idx: path key" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.83723898Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.160139ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.840591687Z level=info msg="Executing migration" id="set path collation in file table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.840611077Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=19.94µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.843732752Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.843752832Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=20.79µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.849297217Z level=info msg="Executing migration" id="managed permissions migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.850493087Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.19583ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.856934609Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.857132441Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=197.632µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.860434047Z level=info msg="Executing migration" id="RBAC action name migrator" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.861717308Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.282641ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.864976715Z level=info msg="Executing migration" id="Add UID column to playlist" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.874560751Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.585096ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.880186637Z level=info msg="Executing migration" id="Update uid column values in playlist" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.880572301Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=386.044µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.884152869Z level=info msg="Executing migration" id="Add index for uid in playlist" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.886203246Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" 
duration=2.049727ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.889614854Z level=info msg="Executing migration" id="update group index for alert rules" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.890038908Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=424.144µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.89409595Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.894297502Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=201.482µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.901321589Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.902048624Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=726.695µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.906921994Z level=info msg="Executing migration" id="add action column to seed_assignment" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.918631288Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.709464ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.922020935Z level=info msg="Executing migration" id="add scope column to seed_assignment" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.930712246Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.693731ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.939411277Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.940550826Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.139369ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:47.943678211Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.019803026Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=76.125055ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.024774477Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.025941566Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.166639ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.037656321Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.039538076Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.881365ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.096470936Z level=info msg="Executing migration" id="add primary key to seed_assigment" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.127373896Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=30.90363ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.132579698Z level=info msg="Executing migration" id="add origin column to 
seed_assignment" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.140562492Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.982164ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.146339809Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.146700862Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=360.683µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.153707949Z level=info msg="Executing migration" id="prevent seeding OnCall access" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.154520415Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=775.565µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.163122575Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.163711159Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=588.514µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.167807102Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.168148725Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=341.303µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.171420662Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.171874035Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=452.854µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.182838214Z level=info msg="Executing migration" id="create folder table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.184190015Z level=info msg="Migration successfully executed" id="create folder table" duration=1.351151ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.189158475Z level=info msg="Executing migration" id="Add index for parent_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.191295543Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.137358ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.196374153Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.197492362Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.111129ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.205433347Z level=info msg="Executing migration" id="Update folder title length" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.205584138Z level=info msg="Migration successfully executed" id="Update folder title length" duration=93.8µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.214783582Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.217878178Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=3.083245ms 18:36:02 grafana 
| logger=migrator t=2025-06-16T18:32:48.22321686Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.22452501Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.30816ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.22813044Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.22938944Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.25864ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.23555735Z level=info msg="Executing migration" id="Sync dashboard and folder table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.236455777Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=855.267µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.24915774Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.249618443Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=460.263µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.254165611Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.255813044Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.633833ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.26656371Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.267912241Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.347841ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.276680313Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.279165042Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=2.484799ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.285882596Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.287523749Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.640103ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.299073374Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.300716237Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.643743ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.310763158Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.31227192Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.508642ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.320650327Z level=info msg="Executing migration" id="create 
anon_device table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.321784137Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.13322ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.329535559Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.331682927Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.147308ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.34445004Z level=info msg="Executing migration" id="add index anon_device.updated_at" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.346749899Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.303799ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.359536372Z level=info msg="Executing migration" id="create signing_key table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.361242405Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.705843ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.367747408Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.36914887Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.401282ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.376012065Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.377998751Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.986276ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.383229443Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.383896208Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=667.485µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.391326459Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.403204315Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.878435ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.407028966Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.407658Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=629.584µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.413844361Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.414046323Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=202.732µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.420047301Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.422056387Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.008216ms 
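The migrator entries above come in fixed pairs: an "Executing migration" line carrying an id, then a "Migration successfully executed" line repeating that id and reporting a duration in µs, ms, or s. A minimal Python sketch (an illustration, not part of Grafana or this CI job) for extracting per-migration timings from a console log of this shape; the console.log filename and the regex are assumptions based only on the visible format:

import re

# Matches completion lines such as:
#   msg="Migration successfully executed" id="create folder table" duration=1.351151ms
DURATION_RE = re.compile(
    r'msg="Migration successfully executed" id="(?P<id>[^"]+)" '
    r'duration=(?P<value>[\d.]+)(?P<unit>µs|ms|s)'
)
UNIT_TO_MS = {"µs": 1e-3, "ms": 1.0, "s": 1e3}

def migration_timings(log_text: str) -> dict[str, float]:
    """Return {migration id: duration in milliseconds} for each completed migration."""
    return {
        m.group("id"): float(m.group("value")) * UNIT_TO_MS[m.group("unit")]
        for m in DURATION_RE.finditer(log_text)
    }

if __name__ == "__main__":
    with open("console.log", encoding="utf-8") as f:
        timings = migration_timings(f.read())
    for mig_id, ms in sorted(timings.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{ms:10.3f} ms  {mig_id}")

Sorting by duration makes the slow table rebuilds below (for example the ~29 ms and ~35 ms cloud_migration renames) easy to spot among hundreds of sub-millisecond entries.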
18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.426131399Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.42620286Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=72.451µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.534389565Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.536519632Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.132547ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.54367231Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.545595996Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.923286ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.549395046Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.551241781Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.846325ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.555629317Z level=info msg="Executing migration" id="create sso_setting table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.556683195Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.053779ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.562937636Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.564390457Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.453582ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.567950776Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.568321479Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=373.383µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.571826217Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.572517933Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=691.076µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.5783554Z level=info msg="Executing migration" id="create cloud_migration table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.579806262Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.456172ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.587280603Z level=info msg="Executing migration" id="create cloud_migration_run table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.588856215Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.539292ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.595025035Z level=info 
msg="Executing migration" id="add stack_id column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.604677223Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.651618ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.608815566Z level=info msg="Executing migration" id="add region_slug column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.61794602Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.137844ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.622588668Z level=info msg="Executing migration" id="add cluster_slug column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.63158327Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=8.993682ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.63530563Z level=info msg="Executing migration" id="add migration uid column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.64265032Z level=info msg="Migration successfully executed" id="add migration uid column" duration=7.34383ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.648941811Z level=info msg="Executing migration" id="Update uid column values for migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.649118672Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=176.341µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.652760901Z level=info msg="Executing migration" id="Add unique index migration_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.653938321Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.17721ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.657270078Z level=info msg="Executing migration" id="add migration run uid column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.666634064Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.363266ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.671991197Z level=info msg="Executing migration" id="Update uid column values for migration run" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.672124608Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=133.501µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.678697021Z level=info msg="Executing migration" id="Add unique index migration_run_uid" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.680896939Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=2.204237ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.684474598Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.713592983Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=29.122245ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.717941978Z level=info msg="Executing migration" id="create cloud_migration_session v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.718893456Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=951.248µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.722643436Z level=info msg="Executing migration" id="create 
index UQE_cloud_migration_session_uid - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.723805066Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.16183ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.741163086Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.74167809Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=514.975µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.745275859Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.747045904Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.767954ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.754720175Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.781007788Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=26.287323ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.78747834Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.788200316Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=721.626µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.791339731Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.793387988Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=2.047187ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.798198437Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.798525899Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=327.392µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.803027086Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.803853692Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=826.436µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.806992677Z level=info msg="Executing migration" id="add snapshot upload_url column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.816392964Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=9.406647ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.822585434Z level=info msg="Executing migration" id="add snapshot status column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.832192481Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=9.606447ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.835302567Z level=info msg="Executing migration" id="add snapshot local_directory column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.844807423Z level=info msg="Migration successfully executed" id="add snapshot local_directory 
column" duration=9.498636ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.878857009Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.89145397Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=12.598191ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.896862334Z level=info msg="Executing migration" id="add snapshot encryption_key column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.905346343Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=8.482949ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.919101504Z level=info msg="Executing migration" id="add snapshot error_string column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.930169163Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=11.067129ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.937368792Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.938586561Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.219249ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.941894648Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.977433225Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=35.532847ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.987005452Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:48.997501148Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=10.494476ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.001645581Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.012007655Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=10.360784ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.016996644Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.02873161Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=11.737856ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.034408346Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.043294327Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=8.884821ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.049958171Z level=info msg="Executing migration" id="increase resource_uid column length" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.049973781Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=15.88µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.053171617Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 18:36:02 grafana | logger=migrator 
t=2025-06-16T18:32:49.053184227Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=13.27µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.05729446Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.069851801Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=12.557531ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.07588198Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.085548918Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.666298ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.089004676Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.08945654Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=451.204µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.093989046Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.094238289Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=248.583µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.099006927Z level=info msg="Executing migration" id="add record column to alert_rule table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.111259226Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.252689ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.11666024Z level=info msg="Executing migration" id="add record column to alert_rule_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.125212729Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=8.543939ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.128260773Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.137812681Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=9.551968ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.142342367Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.15140872Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=9.065213ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.157191287Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.157727471Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=535.884µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.160948437Z level=info msg="Executing migration" id="add metadata column to 
alert_rule table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.172334959Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=11.385652ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.183312237Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.195309555Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=11.997468ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.234219699Z level=info msg="Executing migration" id="delete orphaned service account permissions" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.234850384Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=630.725µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.238940627Z level=info msg="Executing migration" id="adding action set permissions" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.239707943Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=777.586µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.243324172Z level=info msg="Executing migration" id="create user_external_session table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.245227797Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.902955ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.248783486Z level=info msg="Executing migration" id="increase name_id column length to 1024" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.248809647Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=27.231µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.255750293Z level=info msg="Executing migration" id="increase session_id column length to 1024" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.255776713Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=27.35µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.259274511Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.260012547Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=737.106µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.265506662Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.276237368Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=10.729736ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.280937236Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.290246771Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=9.309465ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.293276226Z level=info msg="Executing migration" id="add alert_rule_state table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.293968891Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=692.455µs 18:36:02 grafana 
| logger=migrator t=2025-06-16T18:32:49.296922515Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.297773362Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=849.647µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.305927698Z level=info msg="Executing migration" id="add guid column to alert_rule table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.312988575Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=7.059887ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.315746357Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.322919135Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.172188ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.325834459Z level=info msg="Executing migration" id="cleanup alert_rule_version table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.325852679Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.32601345Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.32602896Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=194.681µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.329297436Z level=info msg="Executing migration" id="populate rule guid in alert rule table" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.330350125Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=1.045569ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.339293098Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.341761717Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.468129ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.345281486Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.347504433Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=2.222427ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.352820246Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.354162927Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.342281ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.360515169Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.363023269Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid 
columns" duration=2.46843ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.36817127Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.378591305Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=10.419695ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.382057942Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.391301518Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.238015ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.396301797Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.408743528Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=12.442451ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.413814819Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.423471687Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.656728ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.42752861Z level=info msg="Executing migration" id="remove the datasources:drilldown action" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.427785332Z level=info msg="Removed 0 datasources:drilldown permissions" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.427833472Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=303.412µs 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.431062238Z level=info msg="Executing migration" id="remove title in folder unique index" 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.432318168Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.25558ms 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.436842345Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.756607374s 18:36:02 grafana | logger=migrator t=2025-06-16T18:32:49.438180576Z level=info msg="Unlocking database" 18:36:02 grafana | logger=sqlstore t=2025-06-16T18:32:49.455112462Z level=info msg="Created default admin" user=admin 18:36:02 grafana | logger=sqlstore t=2025-06-16T18:32:49.455474316Z level=info msg="Created default organization" 18:36:02 grafana | logger=secrets t=2025-06-16T18:32:49.463000746Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 18:36:02 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T18:32:49.561815445Z level=info msg="Restored cache from database" duration=478.775µs 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.570167012Z level=info msg="Locking database" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.570186642Z level=info msg="Starting DB migrations" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.577785723Z level=info msg="Executing migration" id="create resource_migration_log table" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.5785521Z level=info msg="Migration successfully executed" 
id="create resource_migration_log table" duration=768.187µs 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.584715699Z level=info msg="Executing migration" id="Initialize resource tables" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.584731789Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=16.44µs 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.592866735Z level=info msg="Executing migration" id="drop table resource" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.592947276Z level=info msg="Migration successfully executed" id="drop table resource" duration=80.701µs 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.599472488Z level=info msg="Executing migration" id="create table resource" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.601141272Z level=info msg="Migration successfully executed" id="create table resource" duration=1.664634ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.606475394Z level=info msg="Executing migration" id="create table resource, index: 0" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.608449961Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.974037ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.612887837Z level=info msg="Executing migration" id="drop table resource_history" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.612989627Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=101.43µs 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.618956695Z level=info msg="Executing migration" id="create table resource_history" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.620607449Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.650034ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.626146574Z level=info msg="Executing migration" id="create table resource_history, index: 0" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.627481744Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.3311ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.630460509Z level=info msg="Executing migration" id="create table resource_history, index: 1" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.631673798Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.210079ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.636963821Z level=info msg="Executing migration" id="drop table resource_version" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.637190662Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=230.381µs 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.64553527Z level=info msg="Executing migration" id="create table resource_version" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.647064282Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.528502ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.650749892Z level=info msg="Executing migration" id="create table resource_version, index: 0" 18:36:02 grafana | logger=resource-migrator 
t=2025-06-16T18:32:49.651935292Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.1843ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.655776953Z level=info msg="Executing migration" id="drop table resource_blob" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.655860984Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=84.431µs 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.659610223Z level=info msg="Executing migration" id="create table resource_blob" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.661464579Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.854026ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.668975419Z level=info msg="Executing migration" id="create table resource_blob, index: 0" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.671310268Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.333719ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.674307603Z level=info msg="Executing migration" id="create table resource_blob, index: 1" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.675587953Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.27976ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.679346133Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.689801568Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=10.454185ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.695161591Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.704585247Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=9.423766ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.708270827Z level=info msg="Executing migration" id="Add index to resource_history for polling" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.709170634Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=899.647µs 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.712261479Z level=info msg="Executing migration" id="Add index to resource for loading" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.713192787Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=930.868µs 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.71983788Z level=info msg="Executing migration" id="Add column folder in resource_history" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.730049063Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.210613ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.732919196Z level=info msg="Executing migration" id="Add column folder in resource" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.741497895Z level=info msg="Migration successfully executed" id="Add column 
folder in resource" duration=8.577549ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.744449059Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" 18:36:02 grafana | logger=deletion-marker-migrator t=2025-06-16T18:32:49.744479829Z level=info msg="finding any deletion markers" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.744921122Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=472.013µs 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.750591648Z level=info msg="Executing migration" id="Add index to resource_history for get trash" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.751916899Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.324721ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.756792729Z level=info msg="Executing migration" id="Add generation to resource history" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.767575036Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=10.781637ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.771124364Z level=info msg="Executing migration" id="Add generation index to resource history" 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.772175893Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.051109ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.775163907Z level=info msg="migrations completed" performed=26 skipped=0 duration=197.448444ms 18:36:02 grafana | logger=resource-migrator t=2025-06-16T18:32:49.77561574Z level=info msg="Unlocking database" 18:36:02 grafana | t=2025-06-16T18:32:49.775837672Z level=info caller=logger.go:214 time=2025-06-16T18:32:49.775816632Z msg="Using channel notifier" logger=sql-resource-server 18:36:02 grafana | logger=plugin.store t=2025-06-16T18:32:49.786588399Z level=info msg="Loading plugins..." 
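Both migrators in this log (migrator for the main store, resource-migrator for the unified-storage tables) report the same life cycle: "Locking database", "Starting DB migrations", one Executing/successfully-executed pair per pending migration, a "migrations completed" summary with performed/skipped counters, then "Unlocking database". A minimal sketch of that versioned-migration pattern, assuming a SQLite store; the migration_log table, the migration list, and the SQL below are illustrative stand-ins, not Grafana's actual Go implementation or schema:

import sqlite3
import time

# Illustrative migrations: (id, SQL). In the log above, each id is the string
# shown in the "Executing migration" lines; the SQL here is invented for the sketch.
MIGRATIONS = [
    ("create resource table",
     "CREATE TABLE IF NOT EXISTS resource (id INTEGER PRIMARY KEY)"),
    ("Add column folder in resource",
     "ALTER TABLE resource ADD COLUMN folder TEXT"),
]

def run_migrations(conn: sqlite3.Connection) -> tuple[int, int]:
    """Apply unapplied migrations; return (performed, skipped) like the log summary."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS migration_log ("
        "migration_id TEXT PRIMARY KEY, duration_ms REAL, ts TEXT)"
    )
    done = {row[0] for row in conn.execute("SELECT migration_id FROM migration_log")}
    performed = skipped = 0
    for mig_id, sql in MIGRATIONS:
        if mig_id in done:
            skipped += 1
            continue
        start = time.perf_counter()
        conn.execute(sql)
        conn.execute(
            "INSERT INTO migration_log VALUES (?, ?, datetime('now'))",
            (mig_id, (time.perf_counter() - start) * 1e3),
        )
        performed += 1
    conn.commit()
    return performed, skipped

conn = sqlite3.connect(":memory:")
print(run_migrations(conn))  # (2, 0) on a fresh database
print(run_migrations(conn))  # (0, 2): re-running is idempotent

Recording every applied id is what lets a restart resume cleanly; the performed=654 skipped=0 and performed=26 skipped=0 summaries above mean every migration ran for the first time on this fresh container.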
18:36:02 grafana | logger=plugins.registration t=2025-06-16T18:32:49.831786654Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 18:36:02 grafana | logger=plugins.initialization t=2025-06-16T18:32:49.831815224Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 18:36:02 grafana | logger=plugin.store t=2025-06-16T18:32:49.831915695Z level=info msg="Plugins loaded" count=53 duration=45.330466ms 18:36:02 grafana | logger=query_data t=2025-06-16T18:32:49.837810053Z level=info msg="Query Service initialization" 18:36:02 grafana | logger=live.push_http t=2025-06-16T18:32:49.8424232Z level=info msg="Live Push Gateway initialization" 18:36:02 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-16T18:32:49.855723737Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 18:36:02 grafana | logger=ngalert t=2025-06-16T18:32:49.864870851Z level=info msg="Using simple database alert instance store" 18:36:02 grafana | logger=ngalert.state.manager.persist t=2025-06-16T18:32:49.864892741Z level=info msg="Using sync state persister" 18:36:02 grafana | logger=infra.usagestats.collector t=2025-06-16T18:32:49.867678674Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 18:36:02 grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:49.868417951Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 18:36:02 grafana | logger=ngalert.state.manager t=2025-06-16T18:32:49.869157776Z level=info msg="Warming state cache for startup" 18:36:02 grafana | logger=grafanaStorageLogger t=2025-06-16T18:32:49.869875232Z level=info msg="Storage starting" 18:36:02 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-16T18:32:49.871159822Z level=info msg="Starting MultiOrg Alertmanager" 18:36:02 grafana | logger=http.server t=2025-06-16T18:32:49.871686127Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 18:36:02 grafana | logger=provisioning.datasources t=2025-06-16T18:32:49.971039299Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 18:36:02 grafana | logger=plugins.update.checker t=2025-06-16T18:32:49.973575799Z level=info msg="Update check succeeded" duration=105.24ms 18:36:02 grafana | logger=grafana.update.checker t=2025-06-16T18:32:49.976038029Z level=info msg="Update check succeeded" duration=107.210525ms 18:36:02 grafana | logger=sqlstore.transactions t=2025-06-16T18:32:49.983191607Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 18:36:02 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T18:32:50.019180537Z level=info msg="Patterns update finished" duration=149.622778ms 18:36:02 grafana | logger=ngalert.state.manager t=2025-06-16T18:32:50.025282537Z level=info msg="State cache has been initialized" states=0 duration=156.125621ms 18:36:02 grafana | logger=ngalert.scheduler t=2025-06-16T18:32:50.025337057Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 18:36:02 grafana | logger=ticker t=2025-06-16T18:32:50.025455858Z level=info msg=starting first_tick=2025-06-16T18:33:00Z 18:36:02 grafana | logger=provisioning.alerting t=2025-06-16T18:32:50.084786857Z level=info msg="starting to provision alerting" 18:36:02 grafana | logger=provisioning.alerting t=2025-06-16T18:32:50.084821377Z level=info msg="finished to provision alerting" 
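Several subsystems start concurrently here (scheduler, provisioning, update checkers), and the sqlstore.transactions entry above shows what happens when two of them hit SQLite at once: a "database is locked" error answered by sleeping and retrying. A short sketch of that recovery strategy; the exponential backoff with jitter is an assumption for illustration, since the log does not show Grafana's actual retry policy:

import random
import sqlite3
import time

def execute_with_retry(conn: sqlite3.Connection, sql: str, params=(),
                       max_retries: int = 5):
    """Run a statement, sleeping and retrying while SQLite reports a locked database."""
    for retry in range(max_retries + 1):
        try:
            return conn.execute(sql, params)
        except sqlite3.OperationalError as err:
            if "database is locked" not in str(err) or retry == max_retries:
                raise
            # Mirror the log line, then back off exponentially with a little jitter.
            print(f'Database locked, sleeping then retrying error="{err}" retry={retry}')
            time.sleep(min(0.05 * 2 ** retry, 1.0) + random.uniform(0, 0.01))

Only retry=0 appears in the log, which suggests the lock cleared after a single sleep; under sustained contention the counter would climb toward the retry limit.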
18:36:02 grafana | logger=provisioning.dashboard t=2025-06-16T18:32:50.086270238Z level=info msg="starting to provision dashboards" 18:36:02 grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.265802097Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 18:36:02 grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.266663084Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 18:36:02 grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.267944435Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 18:36:02 grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.269489187Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" 18:36:02 grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.270074591Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 18:36:02 grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.270649277Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 18:36:02 grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.273297978Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 18:36:02 grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.273845202Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 18:36:02 grafana | logger=grafana-apiserver t=2025-06-16T18:32:50.274330845Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 18:36:02 grafana | logger=app-registry t=2025-06-16T18:32:50.324575541Z level=info msg="app registry initialized" 18:36:02 grafana | logger=plugin.installer t=2025-06-16T18:32:50.34432456Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 18:36:02 grafana | logger=installer.fs t=2025-06-16T18:32:50.419192395Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" 18:36:02 grafana | logger=plugins.registration t=2025-06-16T18:32:50.447794155Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app 18:36:02 grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:50.447818075Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=579.377184ms 18:36:02 grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:50.447846196Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 18:36:02 grafana | logger=plugin.installer t=2025-06-16T18:32:50.719317356Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 18:36:02 grafana | logger=provisioning.dashboard t=2025-06-16T18:32:50.819457875Z level=info msg="finished to provision dashboards" 18:36:02 grafana | logger=installer.fs t=2025-06-16T18:32:50.852604631Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 18:36:02 grafana | logger=plugins.registration t=2025-06-16T18:32:50.878873753Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 18:36:02 grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:50.878900304Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=431.049788ms 18:36:02 grafana | logger=plugin.backgroundinstaller 
t=2025-06-16T18:32:50.878925914Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version=
18:36:02 grafana | logger=plugin.installer t=2025-06-16T18:32:51.051016732Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version=
18:36:02 grafana | logger=installer.fs t=2025-06-16T18:32:51.107346426Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app"
18:36:02 grafana | logger=plugins.registration t=2025-06-16T18:32:51.123012643Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app
18:36:02 grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:51.123035214Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=244.10503ms
18:36:02 grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:51.123059534Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version=
18:36:02 grafana | logger=plugin.installer t=2025-06-16T18:32:51.305441444Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version=
18:36:02 grafana | logger=installer.fs t=2025-06-16T18:32:51.360265166Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app"
18:36:02 grafana | logger=plugins.registration t=2025-06-16T18:32:51.376998931Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app
18:36:02 grafana | logger=plugin.backgroundinstaller t=2025-06-16T18:32:51.377019061Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=253.954717ms
18:36:02 grafana | logger=infra.usagestats t=2025-06-16T18:34:28.878946113Z level=info msg="Usage stats are ready to report"
18:36:02 kafka | ===> User
18:36:02 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
18:36:02 kafka | ===> Configuring ...
18:36:02 kafka | Running in Zookeeper mode...
18:36:02 kafka | ===> Running preflight checks ...
18:36:02 kafka | ===> Check if /var/lib/kafka/data is writable ...
18:36:02 kafka | ===> Check if Zookeeper is healthy ...
18:36:02 kafka | [2025-06-16 18:32:40,538] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,538] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,538] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,539] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,539] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,539] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,539] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,539] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,542] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@221af3c0 (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,545] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
18:36:02 kafka | [2025-06-16 18:32:40,549] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
18:36:02 kafka | [2025-06-16 18:32:40,561] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
18:36:02 kafka | [2025-06-16 18:32:40,573] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
18:36:02 kafka | [2025-06-16 18:32:40,574] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
18:36:02 kafka | [2025-06-16 18:32:40,581] INFO Socket connection established, initiating session, client: /172.17.0.5:43366, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
18:36:02 kafka | [2025-06-16 18:32:40,604] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000026cbd0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
18:36:02 kafka | [2025-06-16 18:32:40,732] INFO Session: 0x10000026cbd0000 closed (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:40,732] INFO EventThread shut down for session: 0x10000026cbd0000 (org.apache.zookeeper.ClientCnxn)
18:36:02 kafka | Using log4j config /etc/kafka/log4j.properties
18:36:02 kafka | ===> Launching ...
18:36:02 kafka | ===> Launching kafka ...
18:36:02 kafka | [2025-06-16 18:32:41,438] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
18:36:02 kafka | [2025-06-16 18:32:41,752] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
18:36:02 kafka | [2025-06-16 18:32:41,826] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
18:36:02 kafka | [2025-06-16 18:32:41,827] INFO starting (kafka.server.KafkaServer)
18:36:02 kafka | [2025-06-16 18:32:41,828] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
18:36:02 kafka | [2025-06-16 18:32:41,840] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,844] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,846] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@5d8bafa9 (org.apache.zookeeper.ZooKeeper)
18:36:02 kafka | [2025-06-16 18:32:41,849] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
18:36:02 kafka | [2025-06-16 18:32:41,854] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
18:36:02 kafka | [2025-06-16 18:32:41,857] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
18:36:02 kafka | [2025-06-16 18:32:41,858] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
18:36:02 kafka | [2025-06-16 18:32:41,862] INFO Socket connection established, initiating session, client: /172.17.0.5:50420, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
18:36:02 kafka | [2025-06-16 18:32:41,872] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000026cbd0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
18:36:02 kafka | [2025-06-16 18:32:41,878] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
18:36:02 kafka | [2025-06-16 18:32:42,193] INFO Cluster ID = DURHhdNSQwy0Fksygi2p2A (kafka.server.KafkaServer)
18:36:02 kafka | [2025-06-16 18:32:42,197] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
18:36:02 kafka | [2025-06-16 18:32:42,253] INFO KafkaConfig values:
18:36:02 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
18:36:02 kafka | alter.config.policy.class.name = null
18:36:02 kafka | alter.log.dirs.replication.quota.window.num = 11
18:36:02 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
18:36:02 kafka | authorizer.class.name =
18:36:02 kafka | auto.create.topics.enable = true
18:36:02 kafka | auto.include.jmx.reporter = true
18:36:02 kafka | auto.leader.rebalance.enable = true
18:36:02 kafka | background.threads = 10
18:36:02 kafka | broker.heartbeat.interval.ms = 2000
18:36:02 kafka | broker.id = 1
18:36:02 kafka | broker.id.generation.enable = true
18:36:02 kafka | broker.rack = null
18:36:02 kafka | broker.session.timeout.ms = 9000
18:36:02 kafka | client.quota.callback.class = null
18:36:02 kafka | compression.type = producer
18:36:02 kafka | connection.failed.authentication.delay.ms = 100
18:36:02 kafka | connections.max.idle.ms = 600000
18:36:02 kafka | connections.max.reauth.ms = 0
18:36:02 kafka | control.plane.listener.name = null
18:36:02 kafka | controlled.shutdown.enable = true
18:36:02 kafka | controlled.shutdown.max.retries = 3
18:36:02 kafka | controlled.shutdown.retry.backoff.ms = 5000
18:36:02 kafka | controller.listener.names = null
18:36:02 kafka | controller.quorum.append.linger.ms = 25
18:36:02 kafka | controller.quorum.election.backoff.max.ms = 1000
18:36:02 kafka | controller.quorum.election.timeout.ms = 1000
18:36:02 kafka | controller.quorum.fetch.timeout.ms = 2000
18:36:02 kafka | controller.quorum.request.timeout.ms = 2000
18:36:02 kafka | controller.quorum.retry.backoff.ms = 20
18:36:02 kafka | controller.quorum.voters = []
18:36:02 kafka | controller.quota.window.num = 11
18:36:02 kafka | controller.quota.window.size.seconds = 1
18:36:02 kafka | controller.socket.timeout.ms = 30000
18:36:02 kafka | create.topic.policy.class.name = null
18:36:02 kafka | default.replication.factor = 1
18:36:02 kafka | delegation.token.expiry.check.interval.ms = 3600000
18:36:02 kafka | delegation.token.expiry.time.ms = 86400000
18:36:02 kafka | delegation.token.master.key = null
18:36:02 kafka | delegation.token.max.lifetime.ms = 604800000
18:36:02 kafka | delegation.token.secret.key = null
18:36:02 kafka | delete.records.purgatory.purge.interval.requests = 1
18:36:02 kafka | delete.topic.enable = true
18:36:02 kafka | early.start.listeners = null
18:36:02 kafka | fetch.max.bytes = 57671680
18:36:02 kafka | fetch.purgatory.purge.interval.requests = 1000
18:36:02 kafka | group.initial.rebalance.delay.ms = 3000
18:36:02 kafka | group.max.session.timeout.ms = 1800000
18:36:02 kafka | group.max.size = 2147483647
18:36:02 kafka | group.min.session.timeout.ms = 6000
18:36:02 kafka | initial.broker.registration.timeout.ms = 60000
18:36:02 kafka | inter.broker.listener.name = PLAINTEXT
18:36:02 kafka | inter.broker.protocol.version = 3.4-IV0
18:36:02 kafka | kafka.metrics.polling.interval.secs = 10
18:36:02 kafka | kafka.metrics.reporters = []
18:36:02 kafka | leader.imbalance.check.interval.seconds = 300
18:36:02 kafka | leader.imbalance.per.broker.percentage = 10
18:36:02 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
18:36:02 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
18:36:02 kafka | log.cleaner.backoff.ms = 15000
18:36:02 kafka | log.cleaner.dedupe.buffer.size = 134217728
18:36:02 kafka | log.cleaner.delete.retention.ms = 86400000
18:36:02 kafka | log.cleaner.enable = true
18:36:02 kafka | log.cleaner.io.buffer.load.factor = 0.9
18:36:02 kafka | log.cleaner.io.buffer.size = 524288
18:36:02 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
18:36:02 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
18:36:02 kafka | log.cleaner.min.cleanable.ratio = 0.5
18:36:02 kafka | log.cleaner.min.compaction.lag.ms = 0
18:36:02 kafka | log.cleaner.threads = 1
18:36:02 kafka | log.cleanup.policy = [delete]
18:36:02 kafka | log.dir = /tmp/kafka-logs
18:36:02 kafka | log.dirs = /var/lib/kafka/data
18:36:02 kafka | log.flush.interval.messages = 9223372036854775807
18:36:02 kafka | log.flush.interval.ms = null
18:36:02 kafka | log.flush.offset.checkpoint.interval.ms = 60000
18:36:02 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
18:36:02 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
18:36:02 kafka | log.index.interval.bytes = 4096
18:36:02 kafka | log.index.size.max.bytes = 10485760
18:36:02 kafka | log.message.downconversion.enable = true
18:36:02 kafka | log.message.format.version = 3.0-IV1
18:36:02 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
18:36:02 kafka | log.message.timestamp.type = CreateTime
18:36:02 kafka | log.preallocate = false
18:36:02 kafka | log.retention.bytes = -1
18:36:02 kafka | log.retention.check.interval.ms = 300000
18:36:02 kafka | log.retention.hours = 168
18:36:02 kafka | log.retention.minutes = null
18:36:02 kafka | log.retention.ms = null
18:36:02 kafka | log.roll.hours = 168
18:36:02 kafka | log.roll.jitter.hours = 0
18:36:02 kafka | log.roll.jitter.ms = null
18:36:02 kafka | log.roll.ms = null
18:36:02 kafka | log.segment.bytes = 1073741824
18:36:02 kafka | log.segment.delete.delay.ms = 60000
18:36:02 kafka | max.connection.creation.rate = 2147483647
18:36:02 kafka | max.connections = 2147483647
18:36:02 kafka | max.connections.per.ip = 2147483647
18:36:02 kafka | max.connections.per.ip.overrides =
18:36:02 kafka | max.incremental.fetch.session.cache.slots = 1000
18:36:02 kafka | message.max.bytes = 1048588
18:36:02 kafka | metadata.log.dir = null
18:36:02 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
18:36:02 kafka | metadata.log.max.snapshot.interval.ms = 3600000
18:36:02 kafka | metadata.log.segment.bytes = 1073741824
18:36:02 kafka | metadata.log.segment.min.bytes = 8388608
18:36:02 kafka | metadata.log.segment.ms = 604800000
18:36:02 kafka | metadata.max.idle.interval.ms = 500
18:36:02 kafka | metadata.max.retention.bytes = 104857600
18:36:02 kafka | metadata.max.retention.ms = 604800000
18:36:02 kafka | metric.reporters = []
18:36:02 kafka | metrics.num.samples = 2
18:36:02 kafka | metrics.recording.level = INFO
18:36:02 kafka | metrics.sample.window.ms = 30000
18:36:02 kafka | min.insync.replicas = 1
18:36:02 kafka | node.id = 1
18:36:02 kafka | num.io.threads = 8
18:36:02 kafka | num.network.threads = 3
18:36:02 kafka | num.partitions = 1
18:36:02 kafka | num.recovery.threads.per.data.dir = 1
18:36:02 kafka | num.replica.alter.log.dirs.threads = null
18:36:02 kafka | num.replica.fetchers = 1
18:36:02 kafka | offset.metadata.max.bytes = 4096
18:36:02 kafka | offsets.commit.required.acks = -1
18:36:02 kafka | offsets.commit.timeout.ms = 5000
18:36:02 kafka | offsets.load.buffer.size = 5242880
18:36:02 kafka | offsets.retention.check.interval.ms = 600000
18:36:02 kafka | offsets.retention.minutes = 10080
18:36:02 kafka | offsets.topic.compression.codec = 0
18:36:02 kafka | offsets.topic.num.partitions = 50
18:36:02 kafka | offsets.topic.replication.factor = 1
18:36:02 kafka | offsets.topic.segment.bytes = 104857600
18:36:02 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
18:36:02 kafka | password.encoder.iterations = 4096
18:36:02 kafka | password.encoder.key.length = 128
18:36:02 kafka | password.encoder.keyfactory.algorithm = null
18:36:02 kafka | password.encoder.old.secret = null
18:36:02 kafka | password.encoder.secret = null
18:36:02 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
18:36:02 kafka | process.roles = []
18:36:02 kafka | producer.id.expiration.check.interval.ms = 600000
18:36:02 kafka | producer.id.expiration.ms = 86400000
18:36:02 kafka | producer.purgatory.purge.interval.requests = 1000
18:36:02 kafka | queued.max.request.bytes = -1
18:36:02 kafka | queued.max.requests = 500
18:36:02 kafka | quota.window.num = 11
18:36:02 kafka | quota.window.size.seconds = 1
18:36:02 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
18:36:02 kafka | remote.log.manager.task.interval.ms = 30000
18:36:02 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
18:36:02 kafka | remote.log.manager.task.retry.backoff.ms = 500
18:36:02 kafka | remote.log.manager.task.retry.jitter = 0.2
18:36:02 kafka | remote.log.manager.thread.pool.size = 10
18:36:02 kafka | remote.log.metadata.manager.class.name = null
18:36:02 kafka | remote.log.metadata.manager.class.path = null
18:36:02 kafka | remote.log.metadata.manager.impl.prefix = null
18:36:02 kafka | remote.log.metadata.manager.listener.name = null
18:36:02 kafka | remote.log.reader.max.pending.tasks = 100
18:36:02 kafka | remote.log.reader.threads = 10
18:36:02 kafka | remote.log.storage.manager.class.name = null
18:36:02 kafka | remote.log.storage.manager.class.path = null
18:36:02 kafka | remote.log.storage.manager.impl.prefix = null
18:36:02 kafka | remote.log.storage.system.enable = false
18:36:02 kafka | replica.fetch.backoff.ms = 1000
18:36:02 kafka | replica.fetch.max.bytes = 1048576
18:36:02 kafka | replica.fetch.min.bytes = 1
18:36:02 kafka | replica.fetch.response.max.bytes = 10485760
18:36:02 kafka | replica.fetch.wait.max.ms = 500
18:36:02 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
18:36:02 kafka | replica.lag.time.max.ms = 30000
18:36:02 kafka | replica.selector.class = null
18:36:02 kafka | replica.socket.receive.buffer.bytes = 65536
18:36:02 kafka | replica.socket.timeout.ms = 30000
18:36:02 kafka | replication.quota.window.num = 11
18:36:02 kafka | replication.quota.window.size.seconds = 1
18:36:02 kafka | request.timeout.ms = 30000
18:36:02 kafka | reserved.broker.max.id = 1000
18:36:02 kafka | sasl.client.callback.handler.class = null
18:36:02 kafka | sasl.enabled.mechanisms = [GSSAPI]
18:36:02 kafka | sasl.jaas.config = null
18:36:02 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
18:36:02 kafka | sasl.kerberos.min.time.before.relogin = 60000
18:36:02 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
18:36:02 kafka | sasl.kerberos.service.name = null
18:36:02 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
18:36:02 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
18:36:02 kafka | sasl.login.callback.handler.class = null
18:36:02 kafka | sasl.login.class = null
18:36:02 kafka | sasl.login.connect.timeout.ms = null
18:36:02 kafka | sasl.login.read.timeout.ms = null
18:36:02 kafka | sasl.login.refresh.buffer.seconds = 300
18:36:02 kafka | sasl.login.refresh.min.period.seconds = 60
18:36:02 kafka | sasl.login.refresh.window.factor = 0.8
18:36:02 kafka | sasl.login.refresh.window.jitter = 0.05
18:36:02 kafka | sasl.login.retry.backoff.max.ms = 10000
18:36:02 kafka | sasl.login.retry.backoff.ms = 100
18:36:02 kafka | sasl.mechanism.controller.protocol = GSSAPI
18:36:02 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
18:36:02 kafka | sasl.oauthbearer.clock.skew.seconds = 30
18:36:02 kafka | sasl.oauthbearer.expected.audience = null
18:36:02 kafka | sasl.oauthbearer.expected.issuer = null
18:36:02 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
18:36:02 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
18:36:02 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
18:36:02 kafka | sasl.oauthbearer.jwks.endpoint.url = null
18:36:02 kafka | sasl.oauthbearer.scope.claim.name = scope
18:36:02 kafka | sasl.oauthbearer.sub.claim.name = sub
18:36:02 kafka | sasl.oauthbearer.token.endpoint.url = null
18:36:02 kafka | sasl.server.callback.handler.class = null
18:36:02 kafka | sasl.server.max.receive.size = 524288
18:36:02 kafka | security.inter.broker.protocol = PLAINTEXT
18:36:02 kafka | security.providers = null
18:36:02 kafka | socket.connection.setup.timeout.max.ms = 30000
18:36:02 kafka | socket.connection.setup.timeout.ms = 10000
18:36:02 kafka | socket.listen.backlog.size = 50
18:36:02 kafka | socket.receive.buffer.bytes = 102400
18:36:02 kafka | socket.request.max.bytes = 104857600
18:36:02 kafka | socket.send.buffer.bytes = 102400
18:36:02 kafka | ssl.cipher.suites = []
18:36:02 kafka | ssl.client.auth = none
18:36:02 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
18:36:02 kafka | ssl.endpoint.identification.algorithm = https
18:36:02 kafka | ssl.engine.factory.class = null
18:36:02 kafka | ssl.key.password = null
18:36:02 kafka | ssl.keymanager.algorithm = SunX509
18:36:02 kafka | ssl.keystore.certificate.chain = null
18:36:02 kafka | ssl.keystore.key = null
18:36:02 kafka | ssl.keystore.location = null
18:36:02 kafka | ssl.keystore.password = null
18:36:02 kafka | ssl.keystore.type = JKS
18:36:02 kafka | ssl.principal.mapping.rules = DEFAULT
18:36:02 kafka | ssl.protocol = TLSv1.3
18:36:02 kafka | ssl.provider = null
18:36:02 kafka | ssl.secure.random.implementation = null
18:36:02 kafka | ssl.trustmanager.algorithm = PKIX
18:36:02 kafka | ssl.truststore.certificates = null
18:36:02 kafka | ssl.truststore.location = null
18:36:02 kafka | ssl.truststore.password = null
18:36:02 kafka | ssl.truststore.type = JKS
18:36:02 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
18:36:02 kafka | transaction.max.timeout.ms = 900000
18:36:02 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
18:36:02 kafka | transaction.state.log.load.buffer.size = 5242880
18:36:02 kafka | transaction.state.log.min.isr = 2
18:36:02 kafka | transaction.state.log.num.partitions = 50
18:36:02 kafka | transaction.state.log.replication.factor = 3
18:36:02 kafka | transaction.state.log.segment.bytes = 104857600
18:36:02 kafka | transactional.id.expiration.ms = 604800000
18:36:02 kafka | unclean.leader.election.enable = false
18:36:02 kafka | zookeeper.clientCnxnSocket = null
18:36:02 kafka | zookeeper.connect = zookeeper:2181
18:36:02 kafka | zookeeper.connection.timeout.ms = null
18:36:02 kafka | zookeeper.max.in.flight.requests = 10
18:36:02 kafka | zookeeper.metadata.migration.enable = false
18:36:02 kafka | zookeeper.session.timeout.ms = 18000
18:36:02 kafka | zookeeper.set.acl = false
18:36:02 kafka | zookeeper.ssl.cipher.suites = null
18:36:02 kafka | zookeeper.ssl.client.enable = false
18:36:02 kafka | zookeeper.ssl.crl.enable = false
18:36:02 kafka | zookeeper.ssl.enabled.protocols = null
18:36:02 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
18:36:02 kafka | zookeeper.ssl.keystore.location = null
18:36:02 kafka | zookeeper.ssl.keystore.password = null
18:36:02 kafka | zookeeper.ssl.keystore.type = null
18:36:02 kafka | zookeeper.ssl.ocsp.enable = false
18:36:02 kafka | zookeeper.ssl.protocol = TLSv1.2
18:36:02 kafka | zookeeper.ssl.truststore.location = null
18:36:02 kafka | zookeeper.ssl.truststore.password = null
18:36:02 kafka | zookeeper.ssl.truststore.type = null
18:36:02 kafka | (kafka.server.KafkaConfig)
18:36:02 kafka | [2025-06-16 18:32:42,290] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
18:36:02 kafka | [2025-06-16 18:32:42,296] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
18:36:02 kafka | [2025-06-16 18:32:42,295] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
18:36:02 kafka | [2025-06-16 18:32:42,291] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
18:36:02 kafka | [2025-06-16 18:32:42,333] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:32:42,335] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:32:42,349] INFO Loaded 0 logs in 16ms. (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:32:42,349] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:32:42,351] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:32:42,367] INFO Starting the log cleaner (kafka.log.LogCleaner)
18:36:02 kafka | [2025-06-16 18:32:42,410] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
18:36:02 kafka | [2025-06-16 18:32:42,425] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
18:36:02 kafka | [2025-06-16 18:32:42,441] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
18:36:02 kafka | [2025-06-16 18:32:42,489] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
18:36:02 kafka | [2025-06-16 18:32:42,844] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
18:36:02 kafka | [2025-06-16 18:32:42,851] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
18:36:02 kafka | [2025-06-16 18:32:42,882] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
18:36:02 kafka | [2025-06-16 18:32:42,883] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
18:36:02 kafka | [2025-06-16 18:32:42,883] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
18:36:02 kafka | [2025-06-16 18:32:42,887] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
18:36:02 kafka | [2025-06-16 18:32:42,892] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
18:36:02 kafka | [2025-06-16 18:32:42,907] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
18:36:02 kafka | [2025-06-16 18:32:42,909] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
18:36:02 kafka | [2025-06-16 18:32:42,911] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
18:36:02 kafka | [2025-06-16 18:32:42,914] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
18:36:02 kafka | [2025-06-16 18:32:42,929] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
18:36:02 kafka | [2025-06-16 18:32:42,953] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
18:36:02 kafka | [2025-06-16 18:32:42,981] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750098762965,1750098762965,1,0,0,72057604452188161,258,0,27
18:36:02 kafka | (kafka.zk.KafkaZkClient)
18:36:02 kafka | [2025-06-16 18:32:42,982] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
18:36:02 kafka | [2025-06-16 18:32:43,034] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
18:36:02 kafka | [2025-06-16 18:32:43,040] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
18:36:02 kafka | [2025-06-16 18:32:43,045] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
18:36:02 kafka | [2025-06-16 18:32:43,046] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
18:36:02 kafka | [2025-06-16 18:32:43,059] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:32:43,079] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
18:36:02 kafka | [2025-06-16 18:32:43,084] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:32:43,090] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,094] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,098] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
18:36:02 kafka | [2025-06-16 18:32:43,101] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
18:36:02 kafka | [2025-06-16 18:32:43,109] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
18:36:02 kafka | [2025-06-16 18:32:43,126] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
18:36:02 kafka | [2025-06-16 18:32:43,142] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
18:36:02 kafka | [2025-06-16 18:32:43,142] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,148] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
18:36:02 kafka | [2025-06-16 18:32:43,153] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,156] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,158] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,166] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
18:36:02 kafka | [2025-06-16 18:32:43,174] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
18:36:02 kafka | [2025-06-16 18:32:43,175] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,180] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,194] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
18:36:02 kafka | [2025-06-16 18:32:43,200] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
18:36:02 kafka | [2025-06-16 18:32:43,200] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
18:36:02 kafka | [2025-06-16 18:32:43,200] INFO Kafka startTimeMs: 1750098763191 (org.apache.kafka.common.utils.AppInfoParser)
18:36:02 kafka | [2025-06-16 18:32:43,201] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
18:36:02 kafka | [2025-06-16 18:32:43,214] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
18:36:02 kafka | [2025-06-16 18:32:43,214] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,215] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,215] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,215] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,219] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,219] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,220] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,220] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
18:36:02 kafka | [2025-06-16 18:32:43,221] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,225] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
18:36:02 kafka | [2025-06-16 18:32:43,239] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
18:36:02 kafka | [2025-06-16 18:32:43,240] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
18:36:02 kafka | [2025-06-16 18:32:43,260] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
18:36:02 kafka | [2025-06-16 18:32:43,261] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
18:36:02 kafka | [2025-06-16 18:32:43,262] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
18:36:02 kafka | [2025-06-16 18:32:43,262] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
18:36:02 kafka | [2025-06-16 18:32:43,263] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
18:36:02 kafka | [2025-06-16 18:32:43,266] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
18:36:02 kafka | [2025-06-16 18:32:43,266] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,274] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,275] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,275] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,275] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,276] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,293] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:43,342] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
18:36:02 kafka | [2025-06-16 18:32:43,397] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
18:36:02 kafka | [2025-06-16 18:32:43,410] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
18:36:02 kafka | [2025-06-16 18:32:48,295] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:32:48,296] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:33:15,858] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:33:15,865] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:33:15,867] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
18:36:02 kafka | [2025-06-16 18:33:15,869] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
18:36:02 kafka | [2025-06-16 18:33:15,903] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(2hru3UDlRbuucjCtqV3rFg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(usxhw3kjTnCdSwJakDLH4w),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:33:15,904] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
18:36:02 kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,915] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,916] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,916] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,916] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,916] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,917] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,918] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,919] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state
of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,926] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,928] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,929] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,929] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,929] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,929] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,929] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:15,929] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,065] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,065] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,065] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,065] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,065] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,066] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition 
to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,077] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 
(state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,078] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,079] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 
(state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,080] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,081] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,082] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,086] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to 
OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,089] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,090] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,093] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,094] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,094] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,095] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,096] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,097] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,098] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 
18:33:16,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,133] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-9 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,134] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,135] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting 
the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,136] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,137] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,137] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,137] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,137] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,138] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 18:36:02 kafka | [2025-06-16 18:33:16,139] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,203] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,216] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,218] INFO [Partition 
__consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,219] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,220] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,239] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,240] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,240] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,240] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,240] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,249] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,249] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,249] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,249] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,249] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,259] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,260] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,260] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,260] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,260] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,271] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,272] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,272] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,272] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,272] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,280] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,281] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,281] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,281] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,281] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,289] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,290] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,290] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,290] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,290] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,298] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,299] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,299] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,299] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,299] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,307] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,307] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,307] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,307] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,308] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,317] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,317] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,317] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,317] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,317] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,325] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,326] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,326] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,326] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,327] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,337] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,338] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,338] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,338] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,338] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,347] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,348] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,348] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,348] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,348] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,359] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,363] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,363] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,363] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,363] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,401] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,409] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,409] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,409] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,409] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,418] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,419] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,420] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,421] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,421] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,430] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,431] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,431] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,431] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,431] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,438] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:16,439] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:16,439] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,439] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:16,440] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,447] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,448] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,448] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,448] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,448] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,460] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,461] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,461] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,461] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,461] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,469] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,470] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,470] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,470] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,470] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,479] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,479] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,479] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,479] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,480] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,490] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,491] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,491] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,491] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,491] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,499] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,499] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,499] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,499] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,500] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,513] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,514] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,514] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,514] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,515] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,527] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,528] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,528] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,528] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,528] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,534] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,535] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,535] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,535] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,535] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,542] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,542] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,542] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,542] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,543] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,551] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,552] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,552] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,552] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,553] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,560] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,560] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,560] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,560] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,560] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,567] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,568] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,568] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,568] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,568] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,575] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,575] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,575] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,575] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,575] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,583] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,583] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,583] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,583] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,584] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,590] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,590] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,591] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,591] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,591] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,599] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,600] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,600] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,600] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,600] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(2hru3UDlRbuucjCtqV3rFg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,608] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,609] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,609] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,609] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,609] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,616] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,616] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,616] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,616] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,617] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,623] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,624] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,624] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,624] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,624] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,631] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,633] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,633] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,633] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,633] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,644] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,645] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,645] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,645] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,645] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,653] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,654] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,654] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,654] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,654] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,661] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,662] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,662] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,662] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,662] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,671] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,671] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,671] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,671] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,671] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,677] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,678] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,678] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,678] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,678] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,685] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,686] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,686] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,686] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,686] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,694] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,694] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,694] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,694] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,695] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,701] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,702] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,702] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,702] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,702] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,708] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,709] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,709] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,709] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,709] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,716] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,717] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,717] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,717] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,717] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,728] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,728] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,728] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,728] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,728] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,735] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
18:36:02 kafka | [2025-06-16 18:33:16,736] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
18:36:02 kafka | [2025-06-16 18:33:16,736] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,736] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
18:36:02 kafka | [2025-06-16 18:33:16,736] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(usxhw3kjTnCdSwJakDLH4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,743] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,744] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,752] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,753] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,755] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
18:36:02 kafka | [2025-06-16 18:33:16,756] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,759] INFO [Broker id=1] Finished LeaderAndIsr request in 666ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
18:36:02 kafka | [2025-06-16 18:33:16,761] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
18:36:02 kafka | [2025-06-16 18:33:16,765] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=usxhw3kjTnCdSwJakDLH4w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=2hru3UDlRbuucjCtqV3rFg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,766] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,766] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,766] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,766] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,767] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,768] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,769] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,769] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,769] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,769] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,769] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,770] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,770] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,770] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,771] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,772] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,772] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,772] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,772] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,772] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,773] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,773] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,773] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,773] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,774] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,774] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,774] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,774] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,774] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,775] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,775] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,775] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
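A note on the partition numbers in the election and offset-loading lines above: the coordinator for a consumer group is the broker that leads the __consumer_offsets partition the group id hashes to, computed as abs(groupId.hashCode) % offsets.topic.num.partitions (default 50, matching partitions 0-49 elected here). A minimal Python sketch of that mapping, checked against two group-to-partition pairs that appear further down in this log:

```python
def java_string_hash(s: str) -> int:
    # Java String.hashCode(): h = 31*h + c over the characters, wrapped to a signed 32-bit int.
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
    # Kafka's GroupMetadataManager.partitionFor: Utils.abs(hash) % partition count,
    # where Utils.abs maps Integer.MIN_VALUE to 0 instead of overflowing.
    h = java_string_hash(group_id)
    return (0 if h == -0x80000000 else abs(h)) % num_partitions

# Both match the rebalance lines below:
assert coordinator_partition("policy-pap") == 24   # "... (__consumer_offsets-24)"
assert coordinator_partition("testgrp") == 3       # "... (__consumer_offsets-3)"
```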
(kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,775] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,775] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,776] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
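The "Cached leader info ... UpdateMetadataPartitionState" entries above show the broker filling its metadata cache from the controller's UpdateMetadata request; that cache is what clients are served when they ask for cluster metadata. As a rough client-side illustration (using confluent-kafka, the Python binding over librdkafka, the same client family as the rdkafka-... member ids later in this log; attribute names per its admin API):

```python
from confluent_kafka.admin import AdminClient

# Same single-broker address used throughout this log.
admin = AdminClient({"bootstrap.servers": "kafka:9092"})

# list_topics() fetches ClusterMetadata, i.e. the broker's cached view.
md = admin.list_topics(timeout=10)
for topic in ("policy-pdp-pap", "__consumer_offsets"):
    for pid, p in sorted(md.topics[topic].partitions.items()):
        # leader / replicas / isrs correspond to the UpdateMetadataPartitionState
        # fields (leader=1, replicas=[1], isr=[1]) cached above.
        print(f"{topic}-{pid}: leader={p.leader} replicas={p.replicas} isrs={p.isrs}")
```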
(kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,777] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,777] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,777] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,777] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,777] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,778] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:16,778] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:16,778] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:02 kafka | [2025-06-16 18:33:17,354] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:17,371] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9) (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:17,687] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 78a84e9c-9f41-4395-81a2-9a0b7c619942 in Empty state. Created a new member id consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:17,690] INFO [GroupCoordinator 1]: Preparing to rebalance group 78a84e9c-9f41-4395-81a2-9a0b7c619942 in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0 with group instance id None; client reason: need to re-join with the given member-id: consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0) (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:17,795] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 183ef33a-1420-47be-a802-23c79d9c9b0a in Empty state. Created a new member id consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:17,800] INFO [GroupCoordinator 1]: Preparing to rebalance group 183ef33a-1420-47be-a802-23c79d9c9b0a in state PreparingRebalance with old generation 0 (__consumer_offsets-34) (reason: Adding new member consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf with group instance id None; client reason: need to re-join with the given member-id: consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf) (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:20,383] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:20,412] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:20,691] INFO [GroupCoordinator 1]: Stabilized group 78a84e9c-9f41-4395-81a2-9a0b7c619942 generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:20,697] INFO [GroupCoordinator 1]: Assignment received from leader consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0 for group 78a84e9c-9f41-4395-81a2-9a0b7c619942 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:20,801] INFO [GroupCoordinator 1]: Stabilized group 183ef33a-1420-47be-a802-23c79d9c9b0a generation 1 (__consumer_offsets-34) with 1 members (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:20,818] INFO [GroupCoordinator 1]: Assignment received from leader consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf for group 183ef33a-1420-47be-a802-23c79d9c9b0a for generation 1. The group has 1 members, 0 of which are static. 
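The sequence above — dynamic member join, PreparingRebalance, Stabilized, assignment received from the leader — is the standard consumer-group handshake (JoinGroup followed by SyncGroup carrying the leader's assignment). A minimal sketch of the client side, again with confluent-kafka; the group id and topic are taken from this log, but the loop itself is illustrative:

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "testgrp",             # hashes to __consumer_offsets-3, per the sketch above
    "auto.offset.reset": "earliest",
})

# subscribe() starts the JoinGroup dance; the coordinator logs
# "Preparing to rebalance ..." and then "Stabilized group ... with 1 members".
consumer.subscribe(["policy-pdp-pap"])
try:
    for _ in range(10):
        msg = consumer.poll(timeout=1.0)  # drives the rebalance, then fetches records
        if msg is None:
            continue
        if msg.error():
            print("consume error:", msg.error())
        else:
            print(msg.topic(), msg.partition(), msg.value())
finally:
    consumer.close()  # explicit LeaveGroup, like the testgrp member further down
```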
(kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:33:22,794] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 18:36:02 kafka | [2025-06-16 18:33:22,812] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(Nwn68w-mQdek3Bxx6PjCxw),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 18:36:02 kafka | [2025-06-16 18:33:22,812] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) 18:36:02 kafka | [2025-06-16 18:33:22,812] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,812] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,812] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,812] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,818] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,818] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,818] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,818] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,818] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,819] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,820] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,826] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) 18:36:02 kafka | [2025-06-16 18:33:22,826] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,830] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:02 kafka | [2025-06-16 18:33:22,832] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) 18:36:02 kafka | [2025-06-16 18:33:22,833] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:22,833] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) 18:36:02 kafka | [2025-06-16 18:33:22,833] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(Nwn68w-mQdek3Bxx6PjCxw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,836] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,837] INFO [Broker id=1] Finished LeaderAndIsr request in 18ms correlationId 3 from controller 1 for 1 partitions (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,838] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Nwn68w-mQdek3Bxx6PjCxw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,839] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:02 kafka | [2025-06-16 18:33:22,841] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:02 kafka | [2025-06-16 18:34:56,310] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. 
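The policy-notification creation above goes through the broker-side ZooKeeper admin path (kafka.zk.AdminZkClient) and the controller's NonExistentPartition → NewPartition → OnlinePartition state machine. The equivalent request from the client side, sketched with confluent-kafka's AdminClient (same one-partition, replication-factor-1 layout as the HashMap(0 -> ArrayBuffer(1)) assignment above):

```python
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "kafka:9092"})

# One partition, one replica on the only broker, no extra configs ({} in the log).
futures = admin.create_topics(
    [NewTopic("policy-notification", num_partitions=1, replication_factor=1)]
)
for name, fut in futures.items():
    fut.result()  # returns None on success, raises KafkaException on failure
    print(f"created {name}")
```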
Created a new member id rdkafka-aefb4a2a-e212-41cc-907d-9c7f686b26b8 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:34:56,312] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-aefb4a2a-e212-41cc-907d-9c7f686b26b8 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:34:59,313] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:34:59,316] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-aefb4a2a-e212-41cc-907d-9c7f686b26b8 for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:34:59,436] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-aefb4a2a-e212-41cc-907d-9c7f686b26b8 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:34:59,437] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 18:36:02 kafka | [2025-06-16 18:34:59,439] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-aefb4a2a-e212-41cc-907d-9c7f686b26b8, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 18:36:02 policy-api | Waiting for policy-db-migrator port 6824... 18:36:02 policy-api | policy-db-migrator (172.17.0.6:6824) open 18:36:02 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 18:36:02 policy-api | 18:36:02 policy-api | . ____ _ __ _ _ 18:36:02 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 18:36:02 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 18:36:02 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 18:36:02 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 18:36:02 policy-api | =========|_|==============|___/=/_/_/_/ 18:36:02 policy-api | 18:36:02 policy-api | :: Spring Boot :: (v3.4.6) 18:36:02 policy-api | 18:36:02 policy-api | [2025-06-16T18:32:55.368+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final 18:36:02 policy-api | [2025-06-16T18:32:55.468+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 37 (/app/api.jar started by policy in /opt/app/policy/api/bin) 18:36:02 policy-api | [2025-06-16T18:32:55.469+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" 18:36:02 policy-api | [2025-06-16T18:32:56.855+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 18:36:02 policy-api | [2025-06-16T18:32:57.031+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 164 ms. Found 6 JPA repository interfaces. 
18:36:02 policy-api | [2025-06-16T18:32:57.701+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
18:36:02 policy-api | [2025-06-16T18:32:57.719+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
18:36:02 policy-api | [2025-06-16T18:32:57.725+00:00|INFO|StandardService|main] Starting service [Tomcat]
18:36:02 policy-api | [2025-06-16T18:32:57.725+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
18:36:02 policy-api | [2025-06-16T18:32:57.765+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
18:36:02 policy-api | [2025-06-16T18:32:57.766+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2236 ms
18:36:02 policy-api | [2025-06-16T18:32:58.088+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
18:36:02 policy-api | [2025-06-16T18:32:58.170+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
18:36:02 policy-api | [2025-06-16T18:32:58.218+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
18:36:02 policy-api | [2025-06-16T18:32:58.622+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
18:36:02 policy-api | [2025-06-16T18:32:58.660+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
18:36:02 policy-api | [2025-06-16T18:32:58.860+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@5ba36c83
18:36:02 policy-api | [2025-06-16T18:32:58.861+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
18:36:02 policy-api | [2025-06-16T18:32:58.945+00:00|INFO|pooling|main] HHH10001005: Database info:
18:36:02 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
18:36:02 policy-api | Database driver: undefined/unknown
18:36:02 policy-api | Database version: 16.4
18:36:02 policy-api | Autocommit mode: undefined/unknown
18:36:02 policy-api | Isolation level: undefined/unknown
18:36:02 policy-api | Minimum pool size: undefined/unknown
18:36:02 policy-api | Maximum pool size: undefined/unknown
18:36:02 policy-api | [2025-06-16T18:33:00.840+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
18:36:02 policy-api | [2025-06-16T18:33:00.843+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
18:36:02 policy-api | [2025-06-16T18:33:01.455+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
18:36:02 policy-api | [2025-06-16T18:33:02.300+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
18:36:02 policy-api | [2025-06-16T18:33:03.306+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
18:36:02 policy-api | [2025-06-16T18:33:03.354+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
18:36:02 policy-api | [2025-06-16T18:33:03.989+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
18:36:02 policy-api | [2025-06-16T18:33:04.121+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
18:36:02 policy-api | [2025-06-16T18:33:04.139+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
18:36:02 policy-api | [2025-06-16T18:33:04.161+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.519 seconds (process running for 10.087)
18:36:02 policy-api | [2025-06-16T18:33:39.917+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
18:36:02 policy-api | [2025-06-16T18:33:39.917+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
18:36:02 policy-api | [2025-06-16T18:33:39.919+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
18:36:02 policy-api | [2025-06-16T18:34:31.975+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers:
18:36:02 policy-api | []
18:36:03 policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
18:36:03 policy-csit | Run Robot test
18:36:03 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
18:36:03 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
18:36:03 policy-csit | -v POLICY_API_IP:policy-api:6969
18:36:03 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
18:36:03 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
18:36:03 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
18:36:03 policy-csit | -v APEX_IP:policy-apex-pdp:6969
18:36:03 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
18:36:03 policy-csit | -v KAFKA_IP:kafka:9092
18:36:03 policy-csit | -v PROMETHEUS_IP:prometheus:9090
18:36:03 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
18:36:03 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
18:36:03 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
18:36:03 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
18:36:03 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
18:36:03 policy-csit | -v TEMP_FOLDER:/tmp/distribution
18:36:03 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
18:36:03 policy-csit | -v TEST_ENV:docker
18:36:03 policy-csit | -v JAEGER_IP:jaeger:16686
18:36:03 policy-csit | Starting Robot test suites ...
18:36:03 policy-csit | ==============================================================================
18:36:03 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
18:36:03 policy-csit | ==============================================================================
18:36:03 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
18:36:03 policy-csit | ==============================================================================
18:36:03 policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
18:36:03 policy-csit | ------------------------------------------------------------------------------
18:36:03 policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
18:36:03 policy-csit | ------------------------------------------------------------------------------
18:36:03 policy-csit | MakeTopics :: Creates the Policy topics | PASS |
18:36:03 policy-csit | ------------------------------------------------------------------------------
18:36:03 policy-csit | ExecuteXacmlPolicy | PASS |
18:36:03 policy-csit | ------------------------------------------------------------------------------
18:36:03 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
18:36:03 policy-csit | 4 tests, 4 passed, 0 failed
18:36:03 policy-csit | ==============================================================================
18:36:03 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
18:36:03 policy-csit | ==============================================================================
18:36:03 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
18:36:03 policy-csit | ------------------------------------------------------------------------------
18:36:03 policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
18:36:03 policy-csit | ------------------------------------------------------------------------------
18:36:03 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
18:36:03 policy-csit | 2 tests, 2 passed, 0 failed
18:36:03 policy-csit | ==============================================================================
18:36:03 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
18:36:03 policy-csit | 6 tests, 6 passed, 0 failed
18:36:03 policy-csit | ==============================================================================
18:36:03 policy-csit | Output: /tmp/results/output.xml
18:36:03 policy-csit | Log: /tmp/results/log.html
18:36:03 policy-csit | Report: /tmp/results/report.html
18:36:03 policy-csit | RESULT: 0
18:36:03 policy-db-migrator | Waiting for postgres port 5432...
18:36:03 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
18:36:03 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
18:36:03 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
18:36:03 policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded!
18:36:03 policy-db-migrator | Initializing policyadmin...
18:36:03 policy-db-migrator | 321 blocks
18:36:03 policy-db-migrator | Preparing upgrade release version: 0800
18:36:03 policy-db-migrator | Preparing upgrade release version: 0900
18:36:03 policy-db-migrator | Preparing upgrade release version: 1000
18:36:03 policy-db-migrator | Preparing upgrade release version: 1100
18:36:03 policy-db-migrator | Preparing upgrade release version: 1200
18:36:03 policy-db-migrator | Preparing upgrade release version: 1300
18:36:03 policy-db-migrator | Done
18:36:03 policy-db-migrator | List of databases
18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
18:36:03 policy-db-migrator | (9 rows)
18:36:03 policy-db-migrator |
18:36:03 policy-db-migrator | CREATE TABLE
18:36:03 policy-db-migrator | CREATE TABLE
18:36:03 policy-db-migrator | INSERT 0 1
18:36:03 policy-db-migrator | name | version
18:36:03 policy-db-migrator | -------------+---------
18:36:03 policy-db-migrator | policyadmin | 0
18:36:03 policy-db-migrator | (1 row)
18:36:03 policy-db-migrator |
18:36:03 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
18:36:03 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
18:36:03 policy-db-migrator | (0 rows)
18:36:03 policy-db-migrator |
18:36:03 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
18:36:03 policy-db-migrator | List of databases
18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
18:36:03 policy-db-migrator | (9 rows)
18:36:03 policy-db-migrator |
18:36:03 policy-db-migrator | CREATE TABLE
18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
18:36:03 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
18:36:03 policy-db-migrator | CREATE TABLE
18:36:03 policy-db-migrator | upgrade: 0 -> 1300
18:36:03 policy-db-migrator | rc=0
18:36:03 policy-db-migrator |
18:36:03 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
18:36:03 policy-db-migrator | CREATE TABLE
18:36:03 policy-db-migrator | INSERT 0 1
18:36:03 policy-db-migrator | rc=0
18:36:03 policy-db-migrator |
18:36:03 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
18:36:03 policy-db-migrator | CREATE TABLE
18:36:03 policy-db-migrator | INSERT 0 1
18:36:03 policy-db-migrator | rc=0
18:36:03 policy-db-migrator |
18:36:03 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
18:36:03 policy-db-migrator | CREATE TABLE
18:36:03 policy-db-migrator | INSERT 0 1
18:36:03 policy-db-migrator | rc=0
18:36:03 policy-db-migrator |
18:36:03 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
18:36:03 policy-db-migrator | CREATE TABLE
18:36:03 policy-db-migrator | INSERT 0 1
18:36:03 policy-db-migrator | rc=0
18:36:03 policy-db-migrator |
18:36:03 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
18:36:03 policy-db-migrator | CREATE TABLE
18:36:03 policy-db-migrator | INSERT 0 1
18:36:03 policy-db-migrator | rc=0
18:36:03 policy-db-migrator |
18:36:03 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
18:36:03 policy-db-migrator | CREATE TABLE
18:36:03 policy-db-migrator | INSERT 0 1
18:36:03 policy-db-migrator |
rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 18:36:03 
policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0450-pdpgroup.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > 
upgrade 0470-pdp.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0570-toscadatatype.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 
0630-toscanodetype.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0660-toscaparameter.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0670-toscapolicies.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0690-toscapolicy.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0730-toscaproperty.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0770-toscarequirement.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0780-toscarequirements.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 18:36:03 
policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0820-toscatrigger.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 
policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-pdp.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0210-sequence.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0220-sequence.sql 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0120-toscatrigger.sql 18:36:03 policy-db-migrator | DROP TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0140-toscaparameter.sql 18:36:03 policy-db-migrator | DROP TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0150-toscaproperty.sql 18:36:03 policy-db-migrator | DROP TABLE 18:36:03 policy-db-migrator | DROP TABLE 18:36:03 policy-db-migrator | DROP TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 
1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-upgrade.sql 18:36:03 policy-db-migrator | msg 18:36:03 policy-db-migrator | --------------------------- 18:36:03 policy-db-migrator | upgrade to 1100 completed 18:36:03 policy-db-migrator | (1 row) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 18:36:03 policy-db-migrator | DROP INDEX 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0120-audit_sequence.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 18:36:03 policy-db-migrator | DROP TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 18:36:03 policy-db-migrator | DROP TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 18:36:03 policy-db-migrator | DROP TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | policyadmin: OK: upgrade (1300) 18:36:03 policy-db-migrator | List of databases 18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | (9 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | name | version 18:36:03 policy-db-migrator | -------------+--------- 18:36:03 policy-db-migrator | policyadmin | 1300 18:36:03 policy-db-migrator | (1 row) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:03 policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 18:36:03 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.729788 18:36:03 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.778712 18:36:03 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.835044 18:36:03 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.881425 18:36:03 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.931402 18:36:03 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:42.982311 18:36:03 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.032239 18:36:03 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.100787 18:36:03 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.144362 18:36:03 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.19696 18:36:03 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.241597 18:36:03 
policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.285381 18:36:03 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.328382 18:36:03 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.37513 18:36:03 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.421232 18:36:03 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.483944 18:36:03 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.53237 18:36:03 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.582542 18:36:03 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.631682 18:36:03 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.685418 18:36:03 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.731343 18:36:03 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.773959 18:36:03 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.824842 18:36:03 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.874134 18:36:03 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.91748 18:36:03 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:43.968504 18:36:03 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.013279 18:36:03 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.064558 18:36:03 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.10737 18:36:03 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.16687 18:36:03 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.218019 18:36:03 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.273116 18:36:03 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.321444 18:36:03 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.373385 18:36:03 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.426817 18:36:03 policy-db-migrator | 36 | 
0450-pdpgroup.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.474207 18:36:03 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.547337 18:36:03 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.598328 18:36:03 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.649959 18:36:03 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.705655 18:36:03 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.755656 18:36:03 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.804441 18:36:03 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.872891 18:36:03 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.926318 18:36:03 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:44.982289 18:36:03 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.034127 18:36:03 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.083713 18:36:03 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.138754 18:36:03 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.233416 18:36:03 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.291964 18:36:03 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.351934 18:36:03 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.399957 18:36:03 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.452992 18:36:03 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.503087 18:36:03 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.556646 18:36:03 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.628047 18:36:03 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.679306 18:36:03 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.729551 18:36:03 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.778622 18:36:03 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.831427 18:36:03 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 
18:32:45.88747 18:36:03 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:45.953201 18:36:03 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.001404 18:36:03 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.055133 18:36:03 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.116947 18:36:03 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.168174 18:36:03 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.217734 18:36:03 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.301361 18:36:03 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.351572 18:36:03 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.398258 18:36:03 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.447654 18:36:03 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.500791 18:36:03 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.558256 18:36:03 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.60622 18:36:03 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.693882 18:36:03 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.744883 18:36:03 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.797301 18:36:03 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.847832 18:36:03 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.910437 18:36:03 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:46.9626 18:36:03 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.037295 18:36:03 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.093457 18:36:03 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.14473 18:36:03 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.200938 18:36:03 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 
1606251832420800u | 1 | 2025-06-16 18:32:47.253472 18:36:03 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.308058 18:36:03 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.418434 18:36:03 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.470413 18:36:03 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.522861 18:36:03 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.582588 18:36:03 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.639163 18:36:03 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.690628 18:36:03 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.771759 18:36:03 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.820117 18:36:03 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.870217 18:36:03 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251832420800u | 1 | 2025-06-16 18:32:47.921619 18:36:03 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:47.969917 18:36:03 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.026913 18:36:03 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.117272 18:36:03 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.170948 18:36:03 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.238929 18:36:03 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.298754 18:36:03 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.358109 18:36:03 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.412303 18:36:03 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.547342 18:36:03 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.607863 18:36:03 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.667425 18:36:03 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1606251832420900u | 1 | 2025-06-16 18:32:48.727661 18:36:03 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 
1606251832420900u | 1 | 2025-06-16 18:32:48.791013 18:36:03 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:48.841733 18:36:03 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:48.929518 18:36:03 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:48.982024 18:36:03 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.039596 18:36:03 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.089575 18:36:03 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.143935 18:36:03 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.197087 18:36:03 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.286823 18:36:03 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1606251832421000u | 1 | 2025-06-16 18:32:49.338091 18:36:03 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1606251832421100u | 1 | 2025-06-16 18:32:49.385878 18:36:03 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1606251832421200u | 1 | 2025-06-16 18:32:49.435792 18:36:03 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1606251832421200u | 1 | 2025-06-16 18:32:49.489592 18:36:03 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1606251832421200u | 1 | 2025-06-16 18:32:49.542731 18:36:03 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1606251832421200u | 1 | 2025-06-16 18:32:49.605941 18:36:03 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1606251832421300u | 1 | 2025-06-16 18:32:49.65723 18:36:03 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1606251832421300u | 1 | 2025-06-16 18:32:49.709884 18:36:03 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1606251832421300u | 1 | 2025-06-16 18:32:49.759399 18:36:03 policy-db-migrator | (126 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | policyadmin: OK @ 1300 18:36:03 policy-db-migrator | Initializing clampacm... 
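A note on the policyadmin changelog that closes above with "(126 rows)" and "policyadmin: OK @ 1300": the listing is the migrator's own bookkeeping table, one row per applied script, with success = 1 marking a clean apply and attime recording when it ran. A minimal sketch of the query that would reproduce that listing, assuming the table is named policyadmin_schema_changelog by analogy with the clampacm_schema_changelog and pooling_schema_changelog relations that appear in NOTICEs later in this log:

    -- Sketch only: the policyadmin table name is inferred, not shown verbatim in this log.
    SELECT id, script, operation, from_version, to_version, tag, success, attime
      FROM policyadmin_schema_changelog
     ORDER BY id;
    -- Any row with success = 0 would flag a failed script; the run above has none.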
18:36:03 policy-db-migrator | 97 blocks 18:36:03 policy-db-migrator | Preparing upgrade release version: 1400 18:36:03 policy-db-migrator | Preparing upgrade release version: 1500 18:36:03 policy-db-migrator | Preparing upgrade release version: 1600 18:36:03 policy-db-migrator | Preparing upgrade release version: 1601 18:36:03 policy-db-migrator | Preparing upgrade release version: 1700 18:36:03 policy-db-migrator | Preparing upgrade release version: 1701 18:36:03 policy-db-migrator | Done 18:36:03 policy-db-migrator | List of databases 18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | (9 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | name | version 18:36:03 policy-db-migrator | ----------+--------- 18:36:03 policy-db-migrator | clampacm | 0 18:36:03 policy-db-migrator | (1 row) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:03 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 18:36:03 policy-db-migrator | (0 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | clampacm: upgrade available: 0 -> 1701 18:36:03 policy-db-migrator | List of databases 18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:03 policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | (9 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:03 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | upgrade: 0 -> 1701 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-automationcomposition.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0500-participant.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 
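Each "> upgrade NNNN-*.sql" stanza in this clampacm run follows the same three-step shape: the script's own DDL (the CREATE TABLE / CREATE INDEX / ALTER TABLE lines), one bookkeeping row in the changelog (the "INSERT 0 1" line), and the script's exit status ("rc=0"). A hedged sketch of one stanza, where the DDL body and its column are invented for illustration; only the bookkeeping columns and values come from the changelog listing shown further down:

    -- Illustrative only: the real 0500-participant.sql contents are not shown in this log.
    CREATE TABLE IF NOT EXISTS participant (
        participantId VARCHAR(255) NOT NULL PRIMARY KEY  -- hypothetical column
    );
    INSERT INTO clampacm_schema_changelog
        (script, operation, from_version, to_version, tag, success, attime)
    VALUES
        ('0500-participant.sql', 'upgrade', '1300', '1400', '1606251832501400u', 1, now());
    -- "rc=0" is the shell's report of the psql exit code, not a SQL statement.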
18:36:03 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-automationcomposition.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0300-participantreplica.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0400-participant.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-automationcomposition.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 
policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-automationcomposition.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-message.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0200-messagejob.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0200-automationcomposition.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator 
| > upgrade 0700-mb_identificationId_index.sql 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0800-participantreplica.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | UPDATE 0 18:36:03 policy-db-migrator | ALTER TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | clampacm: OK: upgrade (1701) 18:36:03 policy-db-migrator | List of databases 18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | (9 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | name | version 18:36:03 policy-db-migrator | ----------+--------- 18:36:03 policy-db-migrator | clampacm | 1701 18:36:03 policy-db-migrator | (1 row) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:03 policy-db-migrator | 
----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 18:36:03 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.420857 18:36:03 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.477401 18:36:03 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.536619 18:36:03 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.594332 18:36:03 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.667999 18:36:03 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.723769 18:36:03 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.773141 18:36:03 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.829067 18:36:03 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.87523 18:36:03 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:50.930067 18:36:03 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:51.012741 18:36:03 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:51.057869 18:36:03 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1606251832501400u | 1 | 2025-06-16 18:32:51.109919 18:36:03 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.158143 18:36:03 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.2067 18:36:03 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.269058 18:36:03 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.319714 18:36:03 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.398322 18:36:03 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.444804 18:36:03 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.490047 18:36:03 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1606251832501500u | 1 | 2025-06-16 18:32:51.536726 18:36:03 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1606251832501600u | 1 | 2025-06-16 18:32:51.589523 18:36:03 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1606251832501600u | 1 | 2025-06-16 18:32:51.638528 18:36:03 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 
1601 | 1606251832501601u | 1 | 2025-06-16 18:32:51.688593 18:36:03 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1606251832501601u | 1 | 2025-06-16 18:32:51.736929 18:36:03 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1606251832501700u | 1 | 2025-06-16 18:32:51.792273 18:36:03 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1606251832501700u | 1 | 2025-06-16 18:32:51.846888 18:36:03 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1606251832501700u | 1 | 2025-06-16 18:32:51.898778 18:36:03 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:51.950259 18:36:03 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.000805 18:36:03 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.059334 18:36:03 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.110635 18:36:03 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.15733 18:36:03 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.199257 18:36:03 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.244132 18:36:03 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.287551 18:36:03 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1606251832501701u | 1 | 2025-06-16 18:32:52.332469 18:36:03 policy-db-migrator | (37 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | clampacm: OK @ 1701 18:36:03 policy-db-migrator | Initializing pooling... 
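The clampacm history above records two parallel upgrade chains (1300 -> 1400 -> 1500 -> 1600 -> 1700 and 1501 -> 1601 -> 1701), and the tag values read as a timestamp-plus-target encoding: 1606251832501400u plausibly decodes to 16-06-25 18:32:50, target version 1400, "u" for upgrade. That reading is an inference from the values, not something the migrator states. Under that assumption, each upgrade batch can be summarized straight from the changelog:

    -- Sketch: one summary row per upgrade batch, grouped by tag
    -- (from_version/to_version are zero-padded strings, so min/max sort correctly).
    SELECT tag, min(from_version) AS from_v, max(to_version) AS to_v, count(*) AS scripts
      FROM clampacm_schema_changelog
     WHERE operation = 'upgrade'
     GROUP BY tag
     ORDER BY tag;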
18:36:03 policy-db-migrator | 4 blocks 18:36:03 policy-db-migrator | Preparing upgrade release version: 1600 18:36:03 policy-db-migrator | Done 18:36:03 policy-db-migrator | List of databases 18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | (9 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | name | version 18:36:03 policy-db-migrator | ---------+--------- 18:36:03 policy-db-migrator | pooling | 0 18:36:03 policy-db-migrator | (1 row) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:03 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 18:36:03 policy-db-migrator | (0 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | pooling: upgrade available: 0 -> 1600 18:36:03 policy-db-migrator | List of databases 18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | migration | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | (9 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 18:36:03 policy-db-migrator | upgrade: 0 -> 1600 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-distributed.locking.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | pooling: OK: upgrade (1600) 18:36:03 policy-db-migrator | List of databases 18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | pooling | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | (9 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | name | version 18:36:03 policy-db-migrator | ---------+--------- 18:36:03 policy-db-migrator | pooling | 1600 18:36:03 policy-db-migrator | (1 row) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:03 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 18:36:03 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1606251832521600u | 1 | 2025-06-16 18:32:52.976747 18:36:03 policy-db-migrator | (1 row) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | pooling: OK @ 1600 18:36:03 policy-db-migrator | Initializing operationshistory... 18:36:03 policy-db-migrator | 6 blocks 18:36:03 policy-db-migrator | Preparing upgrade release version: 1600 18:36:03 policy-db-migrator | Done 18:36:03 policy-db-migrator | List of databases 18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:03 policy-db-migrator | template0 | 
postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | (9 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | name | version 18:36:03 policy-db-migrator | -------------------+--------- 18:36:03 policy-db-migrator | operationshistory | 0 18:36:03 policy-db-migrator | (1 row) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:03 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 18:36:03 policy-db-migrator | (0 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 18:36:03 policy-db-migrator | List of databases 18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | (9 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:03 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | upgrade: 
0 -> 1600 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | rc=0 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | > upgrade 0110-operationshistory.sql 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | CREATE INDEX 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | INSERT 0 1 18:36:03 policy-db-migrator | operationshistory: OK: upgrade (1600) 18:36:03 policy-db-migrator | List of databases 18:36:03 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:03 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:03 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:03 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:03 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:03 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:03 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:03 policy-db-migrator | (9 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:03 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 18:36:03 policy-db-migrator | CREATE TABLE 18:36:03 policy-db-migrator | name | version 18:36:03 policy-db-migrator | -------------------+--------- 18:36:03 policy-db-migrator | operationshistory | 1600 18:36:03 policy-db-migrator | (1 row) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:03 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 18:36:03 policy-db-migrator | 1 | 
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1606251832531600u | 1 | 2025-06-16 18:32:53.568179 18:36:03 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1606251832531600u | 1 | 2025-06-16 18:32:53.623566 18:36:03 policy-db-migrator | (2 rows) 18:36:03 policy-db-migrator | 18:36:03 policy-db-migrator | operationshistory: OK @ 1600 18:36:03 policy-pap | Waiting for api port 6969... 18:36:03 policy-pap | Waiting for kafka port 9092... 18:36:03 policy-pap | api (172.17.0.7:6969) open 18:36:03 policy-pap | kafka (172.17.0.5:9092) open 18:36:03 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 18:36:03 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 18:36:03 policy-pap | 18:36:03 policy-pap | . ____ _ __ _ _ 18:36:03 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 18:36:03 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 18:36:03 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 18:36:03 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 18:36:03 policy-pap | =========|_|==============|___/=/_/_/_/ 18:36:03 policy-pap | 18:36:03 policy-pap | :: Spring Boot :: (v3.4.6) 18:36:03 policy-pap | 18:36:03 policy-pap | [2025-06-16T18:33:06.713+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 59 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 18:36:03 policy-pap | [2025-06-16T18:33:06.714+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" 18:36:03 policy-pap | [2025-06-16T18:33:08.085+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 18:36:03 policy-pap | [2025-06-16T18:33:08.179+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 84 ms. Found 7 JPA repository interfaces. 18:36:03 policy-pap | [2025-06-16T18:33:09.080+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 18:36:03 policy-pap | [2025-06-16T18:33:09.093+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 18:36:03 policy-pap | [2025-06-16T18:33:09.095+00:00|INFO|StandardService|main] Starting service [Tomcat] 18:36:03 policy-pap | [2025-06-16T18:33:09.095+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 18:36:03 policy-pap | [2025-06-16T18:33:09.146+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 18:36:03 policy-pap | [2025-06-16T18:33:09.146+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2377 ms 18:36:03 policy-pap | [2025-06-16T18:33:09.565+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 18:36:03 policy-pap | [2025-06-16T18:33:09.642+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 18:36:03 policy-pap | [2025-06-16T18:33:09.699+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 18:36:03 policy-pap | [2025-06-16T18:33:10.127+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 18:36:03 policy-pap | [2025-06-16T18:33:10.175+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
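The operationshistory store initialized above is the smallest schema in this run: 0100-ophistory_id_sequence.sql creates an id-sequence table, and 0110-operationshistory.sql creates the history table plus two indexes, which matches the statement results the log confirms for that script (one CREATE TABLE and two CREATE INDEX). A hypothetical reconstruction of the second script, in which every column and index name is an assumption:

    -- Hypothetical: the log shows only the statement results, not this DDL.
    CREATE TABLE IF NOT EXISTS operationshistory (
        id        BIGINT      NOT NULL PRIMARY KEY,
        actor     VARCHAR(50) NOT NULL,
        operation VARCHAR(50) NOT NULL,
        target    VARCHAR(50) NOT NULL,
        starttime TIMESTAMP   NOT NULL,
        endtime   TIMESTAMP,
        outcome   VARCHAR(50),
        message   VARCHAR(255)
    );
    CREATE INDEX IF NOT EXISTS operationshistory_actor_idx
        ON operationshistory (actor, operation, target);
    CREATE INDEX IF NOT EXISTS operationshistory_time_idx
        ON operationshistory (starttime, endtime);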
18:36:03 policy-pap | [2025-06-16T18:33:10.385+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6e337ba1 18:36:03 policy-pap | [2025-06-16T18:33:10.387+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 18:36:03 policy-pap | [2025-06-16T18:33:10.492+00:00|INFO|pooling|main] HHH10001005: Database info: 18:36:03 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 18:36:03 policy-pap | Database driver: undefined/unknown 18:36:03 policy-pap | Database version: 16.4 18:36:03 policy-pap | Autocommit mode: undefined/unknown 18:36:03 policy-pap | Isolation level: undefined/unknown 18:36:03 policy-pap | Minimum pool size: undefined/unknown 18:36:03 policy-pap | Maximum pool size: undefined/unknown 18:36:03 policy-pap | [2025-06-16T18:33:12.432+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 18:36:03 policy-pap | [2025-06-16T18:33:12.436+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 18:36:03 policy-pap | [2025-06-16T18:33:13.580+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:03 policy-pap | allow.auto.create.topics = true 18:36:03 policy-pap | auto.commit.interval.ms = 5000 18:36:03 policy-pap | auto.include.jmx.reporter = true 18:36:03 policy-pap | auto.offset.reset = latest 18:36:03 policy-pap | bootstrap.servers = [kafka:9092] 18:36:03 policy-pap | check.crcs = true 18:36:03 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:03 policy-pap | client.id = consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-1 18:36:03 policy-pap | client.rack = 18:36:03 policy-pap | connections.max.idle.ms = 540000 18:36:03 policy-pap | default.api.timeout.ms = 60000 18:36:03 policy-pap | enable.auto.commit = true 18:36:03 policy-pap | enable.metrics.push = true 18:36:03 policy-pap | exclude.internal.topics = true 18:36:03 policy-pap | fetch.max.bytes = 52428800 18:36:03 policy-pap | fetch.max.wait.ms = 500 18:36:03 policy-pap | fetch.min.bytes = 1 18:36:03 policy-pap | group.id = 78a84e9c-9f41-4395-81a2-9a0b7c619942 18:36:03 policy-pap | group.instance.id = null 18:36:03 policy-pap | group.protocol = classic 18:36:03 policy-pap | group.remote.assignor = null 18:36:03 policy-pap | heartbeat.interval.ms = 3000 18:36:03 policy-pap | interceptor.classes = [] 18:36:03 policy-pap | internal.leave.group.on.close = true 18:36:03 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:03 policy-pap | isolation.level = read_uncommitted 18:36:03 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-pap | max.partition.fetch.bytes = 1048576 18:36:03 policy-pap | max.poll.interval.ms = 300000 18:36:03 policy-pap | max.poll.records = 500 18:36:03 policy-pap | metadata.max.age.ms = 300000 18:36:03 policy-pap | metadata.recovery.strategy = none 18:36:03 policy-pap | metric.reporters = [] 18:36:03 policy-pap | metrics.num.samples = 2 18:36:03 policy-pap | metrics.recording.level = INFO 18:36:03 policy-pap | metrics.sample.window.ms = 30000 18:36:03 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:03 policy-pap | receive.buffer.bytes = 65536 18:36:03 policy-pap | reconnect.backoff.max.ms = 1000 18:36:03 policy-pap | reconnect.backoff.ms = 50 
18:36:03 policy-pap | request.timeout.ms = 30000 18:36:03 policy-pap | retry.backoff.max.ms = 1000 18:36:03 policy-pap | retry.backoff.ms = 100 18:36:03 policy-pap | sasl.client.callback.handler.class = null 18:36:03 policy-pap | sasl.jaas.config = null 18:36:03 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:03 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:03 policy-pap | sasl.kerberos.service.name = null 18:36:03 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:03 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:03 policy-pap | sasl.login.callback.handler.class = null 18:36:03 policy-pap | sasl.login.class = null 18:36:03 policy-pap | sasl.login.connect.timeout.ms = null 18:36:03 policy-pap | sasl.login.read.timeout.ms = null 18:36:03 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:03 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:03 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:03 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:03 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.login.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.mechanism = GSSAPI 18:36:03 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:03 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:03 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:03 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:03 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:03 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:03 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:03 policy-pap | security.protocol = PLAINTEXT 18:36:03 policy-pap | security.providers = null 18:36:03 policy-pap | send.buffer.bytes = 131072 18:36:03 policy-pap | session.timeout.ms = 45000 18:36:03 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:03 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:03 policy-pap | ssl.cipher.suites = null 18:36:03 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:03 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:03 policy-pap | ssl.engine.factory.class = null 18:36:03 policy-pap | ssl.key.password = null 18:36:03 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:03 policy-pap | ssl.keystore.certificate.chain = null 18:36:03 policy-pap | ssl.keystore.key = null 18:36:03 policy-pap | ssl.keystore.location = null 18:36:03 policy-pap | ssl.keystore.password = null 18:36:03 policy-pap | ssl.keystore.type = JKS 18:36:03 policy-pap | ssl.protocol = TLSv1.3 18:36:03 policy-pap | ssl.provider = null 18:36:03 policy-pap | ssl.secure.random.implementation = null 18:36:03 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:03 policy-pap | ssl.truststore.certificates = null 18:36:03 policy-pap | ssl.truststore.location = null 18:36:03 policy-pap | ssl.truststore.password = null 18:36:03 policy-pap | ssl.truststore.type = JKS 18:36:03 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-pap | 18:36:03 policy-pap | [2025-06-16T18:33:13.632+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:03 
policy-pap | [2025-06-16T18:33:13.761+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:03 policy-pap | [2025-06-16T18:33:13.762+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:03 policy-pap | [2025-06-16T18:33:13.762+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098793760 18:36:03 policy-pap | [2025-06-16T18:33:13.764+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-1, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Subscribed to topic(s): policy-pdp-pap 18:36:03 policy-pap | [2025-06-16T18:33:13.764+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:03 policy-pap | allow.auto.create.topics = true 18:36:03 policy-pap | auto.commit.interval.ms = 5000 18:36:03 policy-pap | auto.include.jmx.reporter = true 18:36:03 policy-pap | auto.offset.reset = latest 18:36:03 policy-pap | bootstrap.servers = [kafka:9092] 18:36:03 policy-pap | check.crcs = true 18:36:03 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:03 policy-pap | client.id = consumer-policy-pap-2 18:36:03 policy-pap | client.rack = 18:36:03 policy-pap | connections.max.idle.ms = 540000 18:36:03 policy-pap | default.api.timeout.ms = 60000 18:36:03 policy-pap | enable.auto.commit = true 18:36:03 policy-pap | enable.metrics.push = true 18:36:03 policy-pap | exclude.internal.topics = true 18:36:03 policy-pap | fetch.max.bytes = 52428800 18:36:03 policy-pap | fetch.max.wait.ms = 500 18:36:03 policy-pap | fetch.min.bytes = 1 18:36:03 policy-pap | group.id = policy-pap 18:36:03 policy-pap | group.instance.id = null 18:36:03 policy-pap | group.protocol = classic 18:36:03 policy-pap | group.remote.assignor = null 18:36:03 policy-pap | heartbeat.interval.ms = 3000 18:36:03 policy-pap | interceptor.classes = [] 18:36:03 policy-pap | internal.leave.group.on.close = true 18:36:03 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:03 policy-pap | isolation.level = read_uncommitted 18:36:03 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-pap | max.partition.fetch.bytes = 1048576 18:36:03 policy-pap | max.poll.interval.ms = 300000 18:36:03 policy-pap | max.poll.records = 500 18:36:03 policy-pap | metadata.max.age.ms = 300000 18:36:03 policy-pap | metadata.recovery.strategy = none 18:36:03 policy-pap | metric.reporters = [] 18:36:03 policy-pap | metrics.num.samples = 2 18:36:03 policy-pap | metrics.recording.level = INFO 18:36:03 policy-pap | metrics.sample.window.ms = 30000 18:36:03 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:03 policy-pap | receive.buffer.bytes = 65536 18:36:03 policy-pap | reconnect.backoff.max.ms = 1000 18:36:03 policy-pap | reconnect.backoff.ms = 50 18:36:03 policy-pap | request.timeout.ms = 30000 18:36:03 policy-pap | retry.backoff.max.ms = 1000 18:36:03 policy-pap | retry.backoff.ms = 100 18:36:03 policy-pap | sasl.client.callback.handler.class = null 18:36:03 policy-pap | sasl.jaas.config = null 18:36:03 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:03 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:03 policy-pap | sasl.kerberos.service.name = null 18:36:03 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:03 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:03 policy-pap | sasl.login.callback.handler.class = null 18:36:03 policy-pap | sasl.login.class 
= null 18:36:03 policy-pap | sasl.login.connect.timeout.ms = null 18:36:03 policy-pap | sasl.login.read.timeout.ms = null 18:36:03 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:03 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:03 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:03 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:03 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.login.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.mechanism = GSSAPI 18:36:03 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:03 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:03 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:03 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:03 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:03 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:03 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:03 policy-pap | security.protocol = PLAINTEXT 18:36:03 policy-pap | security.providers = null 18:36:03 policy-pap | send.buffer.bytes = 131072 18:36:03 policy-pap | session.timeout.ms = 45000 18:36:03 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:03 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:03 policy-pap | ssl.cipher.suites = null 18:36:03 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:03 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:03 policy-pap | ssl.engine.factory.class = null 18:36:03 policy-pap | ssl.key.password = null 18:36:03 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:03 policy-pap | ssl.keystore.certificate.chain = null 18:36:03 policy-pap | ssl.keystore.key = null 18:36:03 policy-pap | ssl.keystore.location = null 18:36:03 policy-pap | ssl.keystore.password = null 18:36:03 policy-pap | ssl.keystore.type = JKS 18:36:03 policy-pap | ssl.protocol = TLSv1.3 18:36:03 policy-pap | ssl.provider = null 18:36:03 policy-pap | ssl.secure.random.implementation = null 18:36:03 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:03 policy-pap | ssl.truststore.certificates = null 18:36:03 policy-pap | ssl.truststore.location = null 18:36:03 policy-pap | ssl.truststore.password = null 18:36:03 policy-pap | ssl.truststore.type = JKS 18:36:03 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-pap | 18:36:03 policy-pap | [2025-06-16T18:33:13.765+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:03 policy-pap | [2025-06-16T18:33:13.772+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:03 policy-pap | [2025-06-16T18:33:13.772+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:03 policy-pap | [2025-06-16T18:33:13.772+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098793772 18:36:03 policy-pap | [2025-06-16T18:33:13.772+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 18:36:03 policy-pap | [2025-06-16T18:33:14.105+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The 
default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 18:36:03 policy-pap | [2025-06-16T18:33:14.219+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 18:36:03 policy-pap | [2025-06-16T18:33:14.291+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 18:36:03 policy-pap | [2025-06-16T18:33:14.529+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
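Note: PapDatabaseInitializer above reports seeding the defaultGroup (an xacml subgroup with its supported policy types, and SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP pre-deployed) from /opt/app/policy/pap/etc/mounted/groups.json. A hypothetical sketch of inspecting that mounted file, assuming Gson and a top-level "groups" array as suggested by the PdpGroups toString above (field names are an assumption, not confirmed against PAP's schema):

    import com.google.gson.Gson;
    import com.google.gson.JsonObject;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class GroupsJsonSketch {
        public static void main(String[] args) throws Exception {
            // Read the group definition mounted into the PAP container.
            String json = Files.readString(Path.of("/opt/app/policy/pap/etc/mounted/groups.json"));
            JsonObject root = new Gson().fromJson(json, JsonObject.class);
            // The log shows one group, "defaultGroup", with an "xacml" subgroup whose
            // supportedPolicyTypes include onap.policies.monitoring.* and onap.policies.Naming.
            System.out.println(root.getAsJsonArray("groups").get(0)
                    .getAsJsonObject().get("name"));   // expected: "defaultGroup"
        }
    }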
18:36:03 policy-pap | [2025-06-16T18:33:15.209+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 18:36:03 policy-pap | [2025-06-16T18:33:15.308+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 18:36:03 policy-pap | [2025-06-16T18:33:15.335+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' 18:36:03 policy-pap | [2025-06-16T18:33:15.355+00:00|INFO|ServiceManager|main] Policy PAP starting 18:36:03 policy-pap | [2025-06-16T18:33:15.355+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 18:36:03 policy-pap | [2025-06-16T18:33:15.356+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 18:36:03 policy-pap | [2025-06-16T18:33:15.356+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 18:36:03 policy-pap | [2025-06-16T18:33:15.356+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 18:36:03 policy-pap | [2025-06-16T18:33:15.357+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 18:36:03 policy-pap | [2025-06-16T18:33:15.357+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 18:36:03 policy-pap | [2025-06-16T18:33:15.358+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=78a84e9c-9f41-4395-81a2-9a0b7c619942, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2e7e9897 18:36:03 policy-pap | [2025-06-16T18:33:15.368+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=78a84e9c-9f41-4395-81a2-9a0b7c619942, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:03 policy-pap | [2025-06-16T18:33:15.369+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:03 policy-pap | allow.auto.create.topics = true 18:36:03 policy-pap | auto.commit.interval.ms = 5000 18:36:03 policy-pap | auto.include.jmx.reporter = true 18:36:03 policy-pap | auto.offset.reset = latest 18:36:03 policy-pap | bootstrap.servers = [kafka:9092] 18:36:03 policy-pap | check.crcs = true 18:36:03 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:03 policy-pap | client.id = consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3 18:36:03 policy-pap | client.rack = 18:36:03 policy-pap | connections.max.idle.ms = 540000 18:36:03 policy-pap | default.api.timeout.ms = 60000 18:36:03 policy-pap | enable.auto.commit = true 18:36:03 policy-pap | enable.metrics.push = true 18:36:03 policy-pap | exclude.internal.topics = true 18:36:03 policy-pap | 
fetch.max.bytes = 52428800 18:36:03 policy-pap | fetch.max.wait.ms = 500 18:36:03 policy-pap | fetch.min.bytes = 1 18:36:03 policy-pap | group.id = 78a84e9c-9f41-4395-81a2-9a0b7c619942 18:36:03 policy-pap | group.instance.id = null 18:36:03 policy-pap | group.protocol = classic 18:36:03 policy-pap | group.remote.assignor = null 18:36:03 policy-pap | heartbeat.interval.ms = 3000 18:36:03 policy-pap | interceptor.classes = [] 18:36:03 policy-pap | internal.leave.group.on.close = true 18:36:03 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:03 policy-pap | isolation.level = read_uncommitted 18:36:03 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-pap | max.partition.fetch.bytes = 1048576 18:36:03 policy-pap | max.poll.interval.ms = 300000 18:36:03 policy-pap | max.poll.records = 500 18:36:03 policy-pap | metadata.max.age.ms = 300000 18:36:03 policy-pap | metadata.recovery.strategy = none 18:36:03 policy-pap | metric.reporters = [] 18:36:03 policy-pap | metrics.num.samples = 2 18:36:03 policy-pap | metrics.recording.level = INFO 18:36:03 policy-pap | metrics.sample.window.ms = 30000 18:36:03 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:03 policy-pap | receive.buffer.bytes = 65536 18:36:03 policy-pap | reconnect.backoff.max.ms = 1000 18:36:03 policy-pap | reconnect.backoff.ms = 50 18:36:03 policy-pap | request.timeout.ms = 30000 18:36:03 policy-pap | retry.backoff.max.ms = 1000 18:36:03 policy-pap | retry.backoff.ms = 100 18:36:03 policy-pap | sasl.client.callback.handler.class = null 18:36:03 policy-pap | sasl.jaas.config = null 18:36:03 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:03 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:03 policy-pap | sasl.kerberos.service.name = null 18:36:03 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:03 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:03 policy-pap | sasl.login.callback.handler.class = null 18:36:03 policy-pap | sasl.login.class = null 18:36:03 policy-pap | sasl.login.connect.timeout.ms = null 18:36:03 policy-pap | sasl.login.read.timeout.ms = null 18:36:03 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:03 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:03 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:03 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:03 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.login.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.mechanism = GSSAPI 18:36:03 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:03 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:03 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:03 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:03 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:03 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:03 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:03 policy-pap | security.protocol = PLAINTEXT 18:36:03 policy-pap | 
security.providers = null 18:36:03 policy-pap | send.buffer.bytes = 131072 18:36:03 policy-pap | session.timeout.ms = 45000 18:36:03 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:03 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:03 policy-pap | ssl.cipher.suites = null 18:36:03 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:03 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:03 policy-pap | ssl.engine.factory.class = null 18:36:03 policy-pap | ssl.key.password = null 18:36:03 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:03 policy-pap | ssl.keystore.certificate.chain = null 18:36:03 policy-pap | ssl.keystore.key = null 18:36:03 policy-pap | ssl.keystore.location = null 18:36:03 policy-pap | ssl.keystore.password = null 18:36:03 policy-pap | ssl.keystore.type = JKS 18:36:03 policy-pap | ssl.protocol = TLSv1.3 18:36:03 policy-pap | ssl.provider = null 18:36:03 policy-pap | ssl.secure.random.implementation = null 18:36:03 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:03 policy-pap | ssl.truststore.certificates = null 18:36:03 policy-pap | ssl.truststore.location = null 18:36:03 policy-pap | ssl.truststore.password = null 18:36:03 policy-pap | ssl.truststore.type = JKS 18:36:03 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-pap | 18:36:03 policy-pap | [2025-06-16T18:33:15.369+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:03 policy-pap | [2025-06-16T18:33:15.376+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:03 policy-pap | [2025-06-16T18:33:15.376+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:03 policy-pap | [2025-06-16T18:33:15.376+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098795376 18:36:03 policy-pap | [2025-06-16T18:33:15.376+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Subscribed to topic(s): policy-pdp-pap 18:36:03 policy-pap | [2025-06-16T18:33:15.377+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 18:36:03 policy-pap | [2025-06-16T18:33:15.377+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=2ce83089-4029-40c4-8165-ced703c1674c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1a958d2a 18:36:03 policy-pap | [2025-06-16T18:33:15.377+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=2ce83089-4029-40c4-8165-ced703c1674c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:03 policy-pap | [2025-06-16T18:33:15.377+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:03 policy-pap | allow.auto.create.topics = true 18:36:03 policy-pap | auto.commit.interval.ms = 5000 18:36:03 policy-pap | auto.include.jmx.reporter = true 18:36:03 policy-pap | auto.offset.reset = latest 18:36:03 policy-pap | bootstrap.servers = [kafka:9092] 18:36:03 policy-pap | check.crcs = true 18:36:03 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:03 policy-pap | client.id = consumer-policy-pap-4 18:36:03 policy-pap | client.rack = 18:36:03 policy-pap | connections.max.idle.ms = 540000 18:36:03 policy-pap | default.api.timeout.ms = 60000 18:36:03 policy-pap | enable.auto.commit = true 18:36:03 policy-pap | enable.metrics.push = true 18:36:03 policy-pap | exclude.internal.topics = true 18:36:03 policy-pap | fetch.max.bytes = 52428800 18:36:03 policy-pap | fetch.max.wait.ms = 500 18:36:03 policy-pap | fetch.min.bytes = 1 18:36:03 policy-pap | group.id = policy-pap 18:36:03 policy-pap | group.instance.id = null 18:36:03 policy-pap | group.protocol = classic 18:36:03 policy-pap | group.remote.assignor = null 18:36:03 policy-pap | heartbeat.interval.ms = 3000 18:36:03 policy-pap | interceptor.classes = [] 18:36:03 policy-pap | internal.leave.group.on.close = true 18:36:03 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:03 policy-pap | isolation.level = read_uncommitted 18:36:03 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-pap | max.partition.fetch.bytes = 1048576 18:36:03 policy-pap | max.poll.interval.ms = 300000 18:36:03 policy-pap | max.poll.records = 500 18:36:03 policy-pap | metadata.max.age.ms = 300000 18:36:03 policy-pap | metadata.recovery.strategy = none 18:36:03 policy-pap | metric.reporters = [] 18:36:03 policy-pap | metrics.num.samples = 2 18:36:03 policy-pap | metrics.recording.level = INFO 18:36:03 policy-pap | metrics.sample.window.ms = 30000 18:36:03 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:03 policy-pap | receive.buffer.bytes = 65536 18:36:03 policy-pap | reconnect.backoff.max.ms = 1000 18:36:03 policy-pap | reconnect.backoff.ms = 50 18:36:03 policy-pap | request.timeout.ms = 30000 18:36:03 policy-pap | retry.backoff.max.ms = 1000 18:36:03 policy-pap | retry.backoff.ms = 100 18:36:03 policy-pap | sasl.client.callback.handler.class = null 18:36:03 policy-pap | sasl.jaas.config = null 18:36:03 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:03 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:03 policy-pap | sasl.kerberos.service.name = null 18:36:03 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:03 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:03 policy-pap | sasl.login.callback.handler.class = null 18:36:03 policy-pap | sasl.login.class = null 18:36:03 policy-pap | sasl.login.connect.timeout.ms = null 18:36:03 policy-pap | sasl.login.read.timeout.ms = null 18:36:03 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:03 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:03 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:03 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:03 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:03 policy-pap | 
sasl.login.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.mechanism = GSSAPI 18:36:03 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:03 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:03 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:03 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:03 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:03 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:03 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:03 policy-pap | security.protocol = PLAINTEXT 18:36:03 policy-pap | security.providers = null 18:36:03 policy-pap | send.buffer.bytes = 131072 18:36:03 policy-pap | session.timeout.ms = 45000 18:36:03 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:03 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:03 policy-pap | ssl.cipher.suites = null 18:36:03 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:03 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:03 policy-pap | ssl.engine.factory.class = null 18:36:03 policy-pap | ssl.key.password = null 18:36:03 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:03 policy-pap | ssl.keystore.certificate.chain = null 18:36:03 policy-pap | ssl.keystore.key = null 18:36:03 policy-pap | ssl.keystore.location = null 18:36:03 policy-pap | ssl.keystore.password = null 18:36:03 policy-pap | ssl.keystore.type = JKS 18:36:03 policy-pap | ssl.protocol = TLSv1.3 18:36:03 policy-pap | ssl.provider = null 18:36:03 policy-pap | ssl.secure.random.implementation = null 18:36:03 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:03 policy-pap | ssl.truststore.certificates = null 18:36:03 policy-pap | ssl.truststore.location = null 18:36:03 policy-pap | ssl.truststore.password = null 18:36:03 policy-pap | ssl.truststore.type = JKS 18:36:03 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-pap | 18:36:03 policy-pap | [2025-06-16T18:33:15.377+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:03 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:03 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:03 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098795383 18:36:03 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 18:36:03 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|ServiceManager|main] Policy PAP starting topics 18:36:03 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=2ce83089-4029-40c4-8165-ced703c1674c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:03 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=78a84e9c-9f41-4395-81a2-9a0b7c619942, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:03 policy-pap | [2025-06-16T18:33:15.383+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d86b2547-e3e9-4d78-9a7e-14c8aadfd29d, alive=false, publisher=null]]: starting 18:36:03 policy-pap | [2025-06-16T18:33:15.394+00:00|INFO|ProducerConfig|main] ProducerConfig values: 18:36:03 policy-pap | acks = -1 18:36:03 policy-pap | auto.include.jmx.reporter = true 18:36:03 policy-pap | batch.size = 16384 18:36:03 policy-pap | bootstrap.servers = [kafka:9092] 18:36:03 policy-pap | buffer.memory = 33554432 18:36:03 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:03 policy-pap | client.id = producer-1 18:36:03 policy-pap | compression.gzip.level = -1 18:36:03 policy-pap | compression.lz4.level = 9 18:36:03 policy-pap | compression.type = none 18:36:03 policy-pap | compression.zstd.level = 3 18:36:03 policy-pap | connections.max.idle.ms = 540000 18:36:03 policy-pap | delivery.timeout.ms = 120000 18:36:03 policy-pap | enable.idempotence = true 18:36:03 policy-pap | enable.metrics.push = true 18:36:03 policy-pap | interceptor.classes = [] 18:36:03 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:03 policy-pap | linger.ms = 0 18:36:03 policy-pap | max.block.ms = 60000 18:36:03 policy-pap | max.in.flight.requests.per.connection = 5 18:36:03 policy-pap | max.request.size = 1048576 18:36:03 policy-pap | metadata.max.age.ms = 300000 18:36:03 policy-pap | metadata.max.idle.ms = 300000 18:36:03 policy-pap | metadata.recovery.strategy = none 18:36:03 policy-pap | metric.reporters = [] 18:36:03 policy-pap | metrics.num.samples = 2 18:36:03 policy-pap | metrics.recording.level = INFO 18:36:03 policy-pap | metrics.sample.window.ms = 30000 18:36:03 policy-pap | partitioner.adaptive.partitioning.enable = true 18:36:03 policy-pap | partitioner.availability.timeout.ms = 0 18:36:03 policy-pap | partitioner.class = null 18:36:03 policy-pap | partitioner.ignore.keys = false 18:36:03 policy-pap | receive.buffer.bytes = 32768 18:36:03 policy-pap | reconnect.backoff.max.ms = 1000 18:36:03 policy-pap | reconnect.backoff.ms = 50 18:36:03 policy-pap | request.timeout.ms = 30000 18:36:03 policy-pap | retries = 2147483647 18:36:03 policy-pap | retry.backoff.max.ms = 1000 18:36:03 policy-pap | retry.backoff.ms = 100 18:36:03 policy-pap | sasl.client.callback.handler.class = null 18:36:03 policy-pap | sasl.jaas.config = null 18:36:03 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:03 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:03 policy-pap 
| sasl.kerberos.service.name = null 18:36:03 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:03 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:03 policy-pap | sasl.login.callback.handler.class = null 18:36:03 policy-pap | sasl.login.class = null 18:36:03 policy-pap | sasl.login.connect.timeout.ms = null 18:36:03 policy-pap | sasl.login.read.timeout.ms = null 18:36:03 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:03 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:03 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:03 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:03 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.login.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.mechanism = GSSAPI 18:36:03 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:03 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:03 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:03 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:03 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:03 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:03 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:03 policy-pap | security.protocol = PLAINTEXT 18:36:03 policy-pap | security.providers = null 18:36:03 policy-pap | send.buffer.bytes = 131072 18:36:03 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:03 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:03 policy-pap | ssl.cipher.suites = null 18:36:03 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:03 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:03 policy-pap | ssl.engine.factory.class = null 18:36:03 policy-pap | ssl.key.password = null 18:36:03 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:03 policy-pap | ssl.keystore.certificate.chain = null 18:36:03 policy-pap | ssl.keystore.key = null 18:36:03 policy-pap | ssl.keystore.location = null 18:36:03 policy-pap | ssl.keystore.password = null 18:36:03 policy-pap | ssl.keystore.type = JKS 18:36:03 policy-pap | ssl.protocol = TLSv1.3 18:36:03 policy-pap | ssl.provider = null 18:36:03 policy-pap | ssl.secure.random.implementation = null 18:36:03 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:03 policy-pap | ssl.truststore.certificates = null 18:36:03 policy-pap | ssl.truststore.location = null 18:36:03 policy-pap | ssl.truststore.password = null 18:36:03 policy-pap | ssl.truststore.type = JKS 18:36:03 policy-pap | transaction.timeout.ms = 60000 18:36:03 policy-pap | transactional.id = null 18:36:03 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:03 policy-pap | 18:36:03 policy-pap | [2025-06-16T18:33:15.395+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:03 policy-pap | [2025-06-16T18:33:15.406+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
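Note: the ProducerConfig dump above is producer-1, created for the first InlineKafkaTopicSink: idempotent (hence acks=-1 and retries=2147483647), compression.type=none, string serializers. A minimal sketch, assuming plain kafka-clients rather than ONAP's KafkaPublisherWrapper, of an equivalent publisher:

    // Sketch only: an idempotent string-valued producer matching the dump above.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // implies acks=all and max retries
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical payload; PAP publishes JSON envelopes like PDP_UPDATE on this topic.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"),
                        (meta, ex) -> { if (ex != null) ex.printStackTrace(); });
                producer.flush();
            }
        }
    }

Idempotence is also what produces the "ProducerId set to ... with epoch 0" handshake visible later in this log.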
18:36:03 policy-pap | [2025-06-16T18:33:15.421+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:03 policy-pap | [2025-06-16T18:33:15.421+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:03 policy-pap | [2025-06-16T18:33:15.421+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098795421 18:36:03 policy-pap | [2025-06-16T18:33:15.422+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d86b2547-e3e9-4d78-9a7e-14c8aadfd29d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 18:36:03 policy-pap | [2025-06-16T18:33:15.422+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b38f4fd3-1c37-4a8a-8424-78fd1d3ae126, alive=false, publisher=null]]: starting 18:36:03 policy-pap | [2025-06-16T18:33:15.423+00:00|INFO|ProducerConfig|main] ProducerConfig values: 18:36:03 policy-pap | acks = -1 18:36:03 policy-pap | auto.include.jmx.reporter = true 18:36:03 policy-pap | batch.size = 16384 18:36:03 policy-pap | bootstrap.servers = [kafka:9092] 18:36:03 policy-pap | buffer.memory = 33554432 18:36:03 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:03 policy-pap | client.id = producer-2 18:36:03 policy-pap | compression.gzip.level = -1 18:36:03 policy-pap | compression.lz4.level = 9 18:36:03 policy-pap | compression.type = none 18:36:03 policy-pap | compression.zstd.level = 3 18:36:03 policy-pap | connections.max.idle.ms = 540000 18:36:03 policy-pap | delivery.timeout.ms = 120000 18:36:03 policy-pap | enable.idempotence = true 18:36:03 policy-pap | enable.metrics.push = true 18:36:03 policy-pap | interceptor.classes = [] 18:36:03 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:03 policy-pap | linger.ms = 0 18:36:03 policy-pap | max.block.ms = 60000 18:36:03 policy-pap | max.in.flight.requests.per.connection = 5 18:36:03 policy-pap | max.request.size = 1048576 18:36:03 policy-pap | metadata.max.age.ms = 300000 18:36:03 policy-pap | metadata.max.idle.ms = 300000 18:36:03 policy-pap | metadata.recovery.strategy = none 18:36:03 policy-pap | metric.reporters = [] 18:36:03 policy-pap | metrics.num.samples = 2 18:36:03 policy-pap | metrics.recording.level = INFO 18:36:03 policy-pap | metrics.sample.window.ms = 30000 18:36:03 policy-pap | partitioner.adaptive.partitioning.enable = true 18:36:03 policy-pap | partitioner.availability.timeout.ms = 0 18:36:03 policy-pap | partitioner.class = null 18:36:03 policy-pap | partitioner.ignore.keys = false 18:36:03 policy-pap | receive.buffer.bytes = 32768 18:36:03 policy-pap | reconnect.backoff.max.ms = 1000 18:36:03 policy-pap | reconnect.backoff.ms = 50 18:36:03 policy-pap | request.timeout.ms = 30000 18:36:03 policy-pap | retries = 2147483647 18:36:03 policy-pap | retry.backoff.max.ms = 1000 18:36:03 policy-pap | retry.backoff.ms = 100 18:36:03 policy-pap | sasl.client.callback.handler.class = null 18:36:03 policy-pap | sasl.jaas.config = null 18:36:03 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:03 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:03 policy-pap | sasl.kerberos.service.name = null 18:36:03 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:03 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:03 policy-pap | sasl.login.callback.handler.class = null 18:36:03 policy-pap | sasl.login.class = null 18:36:03 policy-pap | 
sasl.login.connect.timeout.ms = null 18:36:03 policy-pap | sasl.login.read.timeout.ms = null 18:36:03 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:03 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:03 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:03 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:03 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.login.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.mechanism = GSSAPI 18:36:03 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:03 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:03 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:03 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:03 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:03 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:03 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:03 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:03 policy-pap | security.protocol = PLAINTEXT 18:36:03 policy-pap | security.providers = null 18:36:03 policy-pap | send.buffer.bytes = 131072 18:36:03 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:03 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:03 policy-pap | ssl.cipher.suites = null 18:36:03 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:03 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:03 policy-pap | ssl.engine.factory.class = null 18:36:03 policy-pap | ssl.key.password = null 18:36:03 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:03 policy-pap | ssl.keystore.certificate.chain = null 18:36:03 policy-pap | ssl.keystore.key = null 18:36:03 policy-pap | ssl.keystore.location = null 18:36:03 policy-pap | ssl.keystore.password = null 18:36:03 policy-pap | ssl.keystore.type = JKS 18:36:03 policy-pap | ssl.protocol = TLSv1.3 18:36:03 policy-pap | ssl.provider = null 18:36:03 policy-pap | ssl.secure.random.implementation = null 18:36:03 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:03 policy-pap | ssl.truststore.certificates = null 18:36:03 policy-pap | ssl.truststore.location = null 18:36:03 policy-pap | ssl.truststore.password = null 18:36:03 policy-pap | ssl.truststore.type = JKS 18:36:03 policy-pap | transaction.timeout.ms = 60000 18:36:03 policy-pap | transactional.id = null 18:36:03 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:03 policy-pap | 18:36:03 policy-pap | [2025-06-16T18:33:15.423+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:03 policy-pap | [2025-06-16T18:33:15.424+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
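Note: producer-2 above is configured identically to producer-1 apart from client.id; it backs the second InlineKafkaTopicSink (partitionId b38f4fd3-1c37-4a8a-8424-78fd1d3ae126) whose starting/created records bracket this dump, so the producer sketch after the first ProducerConfig dump applies here unchanged.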
18:36:03 policy-pap | [2025-06-16T18:33:15.427+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:03 policy-pap | [2025-06-16T18:33:15.427+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:03 policy-pap | [2025-06-16T18:33:15.427+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098795427 18:36:03 policy-pap | [2025-06-16T18:33:15.428+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b38f4fd3-1c37-4a8a-8424-78fd1d3ae126, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 18:36:03 policy-pap | [2025-06-16T18:33:15.428+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 18:36:03 policy-pap | [2025-06-16T18:33:15.428+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 18:36:03 policy-pap | [2025-06-16T18:33:15.429+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 18:36:03 policy-pap | [2025-06-16T18:33:15.430+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 18:36:03 policy-pap | [2025-06-16T18:33:15.432+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 18:36:03 policy-pap | [2025-06-16T18:33:15.432+00:00|INFO|TimerManager|Thread-9] timer manager update started 18:36:03 policy-pap | [2025-06-16T18:33:15.433+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 18:36:03 policy-pap | [2025-06-16T18:33:15.433+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 18:36:03 policy-pap | [2025-06-16T18:33:15.433+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 18:36:03 policy-pap | [2025-06-16T18:33:15.433+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 18:36:03 policy-pap | [2025-06-16T18:33:15.434+00:00|INFO|ServiceManager|main] Policy PAP started 18:36:03 policy-pap | [2025-06-16T18:33:15.434+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.482 seconds (process running for 10.056) 18:36:03 policy-pap | [2025-06-16T18:33:15.843+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: DURHhdNSQwy0Fksygi2p2A 18:36:03 policy-pap | [2025-06-16T18:33:15.844+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 18:36:03 policy-pap | [2025-06-16T18:33:15.844+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Cluster ID: DURHhdNSQwy0Fksygi2p2A 18:36:03 policy-pap | [2025-06-16T18:33:15.845+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: DURHhdNSQwy0Fksygi2p2A 18:36:03 policy-pap | [2025-06-16T18:33:15.869+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 18:36:03 policy-pap | [2025-06-16T18:33:15.869+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 18:36:03 policy-pap | [2025-06-16T18:33:15.892+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The 
metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 18:36:03 policy-pap | [2025-06-16T18:33:15.892+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: DURHhdNSQwy0Fksygi2p2A 18:36:03 policy-pap | [2025-06-16T18:33:16.031+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 18:36:03 policy-pap | [2025-06-16T18:33:16.064+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 18:36:03 policy-pap | [2025-06-16T18:33:16.273+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 18:36:03 policy-pap | [2025-06-16T18:33:16.273+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 18:36:03 policy-pap | [2025-06-16T18:33:16.634+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 18:36:03 policy-pap | [2025-06-16T18:33:16.746+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 18:36:03 policy-pap | [2025-06-16T18:33:17.327+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 18:36:03 policy-pap | [2025-06-16T18:33:17.333+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 18:36:03 policy-pap | [2025-06-16T18:33:17.362+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9 18:36:03 policy-pap | [2025-06-16T18:33:17.362+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 18:36:03 policy-pap | [2025-06-16T18:33:17.681+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 18:36:03 policy-pap | 
[2025-06-16T18:33:17.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] (Re-)joining group 18:36:03 policy-pap | [2025-06-16T18:33:17.688+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Request joining group due to: need to re-join with the given member-id: consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0 18:36:03 policy-pap | [2025-06-16T18:33:17.688+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] (Re-)joining group 18:36:03 policy-pap | [2025-06-16T18:33:20.387+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9', protocol='range'} 18:36:03 policy-pap | [2025-06-16T18:33:20.398+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9=Assignment(partitions=[policy-pdp-pap-0])} 18:36:03 policy-pap | [2025-06-16T18:33:20.429+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-d8958685-9389-463a-9974-c636038d81b9', protocol='range'} 18:36:03 policy-pap | [2025-06-16T18:33:20.431+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 18:36:03 policy-pap | [2025-06-16T18:33:20.436+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 18:36:03 policy-pap | [2025-06-16T18:33:20.459+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 18:36:03 policy-pap | [2025-06-16T18:33:20.482+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
18:36:03 policy-pap | [2025-06-16T18:33:20.693+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Successfully joined group with generation Generation{generationId=1, memberId='consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0', protocol='range'} 18:36:03 policy-pap | [2025-06-16T18:33:20.694+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Finished assignment for group at generation 1: {consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0=Assignment(partitions=[policy-pdp-pap-0])} 18:36:03 policy-pap | [2025-06-16T18:33:20.700+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Successfully synced group in generation Generation{generationId=1, memberId='consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3-cddc4508-684c-4657-b774-eec93d7842b0', protocol='range'} 18:36:03 policy-pap | [2025-06-16T18:33:20.701+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 18:36:03 policy-pap | [2025-06-16T18:33:20.701+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Adding newly assigned partitions: policy-pdp-pap-0 18:36:03 policy-pap | [2025-06-16T18:33:20.703+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Found no committed offset for partition policy-pdp-pap-0 18:36:03 policy-pap | [2025-06-16T18:33:20.705+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-78a84e9c-9f41-4395-81a2-9a0b7c619942-3, groupId=78a84e9c-9f41-4395-81a2-9a0b7c619942] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
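Note: both consumer groups above complete the classic join cycle: discover the coordinator, (re-)join, sync at generation 1, get policy-pdp-pap-0 assigned, find no committed offset, and reset to the log-end position (offset 1) because auto.offset.reset=latest. A sketch, assuming plain kafka-clients and not ONAP's KafkaConsumerWrapper, of a rebalance listener that surfaces the same steps:

    import java.util.Collection;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    class LoggingRebalanceListener implements ConsumerRebalanceListener {
        private final KafkaConsumer<String, String> consumer;
        LoggingRebalanceListener(KafkaConsumer<String, String> consumer) { this.consumer = consumer; }
        @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
            for (TopicPartition tp : parts) {
                // position() forces the same offset-reset path logged above for policy-pdp-pap-0.
                System.out.printf("assigned %s at offset %d%n", tp, consumer.position(tp));
            }
        }
        @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) {
            System.out.println("revoked: " + parts);
        }
    }
    // usage: consumer.subscribe(List.of("policy-pdp-pap"), new LoggingRebalanceListener(consumer));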
18:36:03 policy-pap | [2025-06-16T18:33:21.953+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 18:36:03 policy-pap | [] 18:36:03 policy-pap | [2025-06-16T18:33:21.954+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"d31f15e3-8200-426a-9c05-c67231bf3e73","timestampMs":1750098797398,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4"} 18:36:03 policy-pap | [2025-06-16T18:33:21.954+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"d31f15e3-8200-426a-9c05-c67231bf3e73","timestampMs":1750098797398,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4"} 18:36:03 policy-pap | [2025-06-16T18:33:21.956+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK 18:36:03 policy-pap | [2025-06-16T18:33:21.957+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_TOPIC_CHECK 18:36:03 policy-pap | [2025-06-16T18:33:22.010+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"421a3372-5f8e-464d-b798-a50b4b48cf6c","timestampMs":1750098801958,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup"} 18:36:03 policy-pap | [2025-06-16T18:33:22.011+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"421a3372-5f8e-464d-b798-a50b4b48cf6c","timestampMs":1750098801958,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup"} 18:36:03 policy-pap | [2025-06-16T18:33:22.017+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 18:36:03 policy-pap | [2025-06-16T18:33:22.620+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting 18:36:03 policy-pap | [2025-06-16T18:33:22.620+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting listener 18:36:03 policy-pap | [2025-06-16T18:33:22.620+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting timer 18:36:03 policy-pap | [2025-06-16T18:33:22.620+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=dbb93529-7620-483d-89b0-797ac3cb8b31, expireMs=1750098832620] 18:36:03 policy-pap | [2025-06-16T18:33:22.621+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting enqueue 18:36:03 policy-pap | [2025-06-16T18:33:22.622+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate started 18:36:03 policy-pap | [2025-06-16T18:33:22.622+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=dbb93529-7620-483d-89b0-797ac3cb8b31, expireMs=1750098832620] 18:36:03 policy-pap | [2025-06-16T18:33:22.625+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | 
{"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"dbb93529-7620-483d-89b0-797ac3cb8b31","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:22.659+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | 
{"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"dbb93529-7620-483d-89b0-797ac3cb8b31","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:22.661+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 18:36:03 policy-pap | [2025-06-16T18:33:22.662+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | 
{"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"dbb93529-7620-483d-89b0-797ac3cb8b31","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:22.662+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 18:36:03 policy-pap | [2025-06-16T18:33:22.764+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"dbb93529-7620-483d-89b0-797ac3cb8b31","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"90b3f482-cbc9-4416-b421-d6129b5f10b4","timestampMs":1750098802750,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:22.765+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping 18:36:03 policy-pap | [2025-06-16T18:33:22.765+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping enqueue 18:36:03 policy-pap | [2025-06-16T18:33:22.765+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping timer 18:36:03 policy-pap | [2025-06-16T18:33:22.765+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=dbb93529-7620-483d-89b0-797ac3cb8b31, expireMs=1750098832620] 18:36:03 policy-pap | [2025-06-16T18:33:22.766+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping listener 18:36:03 policy-pap | 
[2025-06-16T18:33:22.766+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopped 18:36:03 policy-pap | [2025-06-16T18:33:22.768+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"dbb93529-7620-483d-89b0-797ac3cb8b31","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"90b3f482-cbc9-4416-b421-d6129b5f10b4","timestampMs":1750098802750,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:22.769+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id dbb93529-7620-483d-89b0-797ac3cb8b31 18:36:03 policy-pap | [2025-06-16T18:33:22.772+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"504884f8-f384-4692-b040-357f65737559","timestampMs":1750098802756,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:22.782+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 18:36:03 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.Naming","policy-type-version":"1.0.0","policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 18:36:03 policy-pap | [2025-06-16T18:33:22.783+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate successful 18:36:03 policy-pap | [2025-06-16T18:33:22.783+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 start publishing next request 18:36:03 policy-pap | [2025-06-16T18:33:22.783+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange starting 18:36:03 policy-pap | [2025-06-16T18:33:22.783+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange starting listener 18:36:03 policy-pap | [2025-06-16T18:33:22.784+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange starting timer 18:36:03 policy-pap | [2025-06-16T18:33:22.784+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=100c0bdc-0836-4c51-8f89-991d9512ea35, expireMs=1750098832784] 18:36:03 policy-pap | [2025-06-16T18:33:22.784+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange starting enqueue 18:36:03 policy-pap | [2025-06-16T18:33:22.784+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=100c0bdc-0836-4c51-8f89-991d9512ea35, expireMs=1750098832784] 18:36:03 policy-pap | [2025-06-16T18:33:22.784+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange started 18:36:03 policy-pap | [2025-06-16T18:33:22.785+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | 
{"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"100c0bdc-0836-4c51-8f89-991d9512ea35","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:22.807+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} 18:36:03 policy-pap | [2025-06-16T18:33:23.120+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"504884f8-f384-4692-b040-357f65737559","timestampMs":1750098802756,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:23.121+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 18:36:03 policy-pap | [2025-06-16T18:33:23.125+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"100c0bdc-0836-4c51-8f89-991d9512ea35","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:23.125+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 18:36:03 policy-pap | [2025-06-16T18:33:23.125+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"100c0bdc-0836-4c51-8f89-991d9512ea35","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"39a1e321-3725-4f00-b036-713652cd70c3","timestampMs":1750098802800,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:23.376+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange stopping 18:36:03 policy-pap | [2025-06-16T18:33:23.376+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange stopping enqueue 18:36:03 policy-pap | [2025-06-16T18:33:23.376+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange stopping timer 18:36:03 policy-pap | [2025-06-16T18:33:23.376+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=100c0bdc-0836-4c51-8f89-991d9512ea35, expireMs=1750098832784] 18:36:03 policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange stopping listener 18:36:03 policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange stopped 18:36:03 policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpStateChange successful 18:36:03 policy-pap | 
[2025-06-16T18:33:23.377+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 start publishing next request 18:36:03 policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting 18:36:03 policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting listener 18:36:03 policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting timer 18:36:03 policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=10aa937b-f7d1-4c76-92ce-87031228576d, expireMs=1750098833377] 18:36:03 policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting enqueue 18:36:03 policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate started 18:36:03 policy-pap | [2025-06-16T18:33:23.377+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"10aa937b-f7d1-4c76-92ce-87031228576d","timestampMs":1750098803112,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:23.383+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"100c0bdc-0836-4c51-8f89-991d9512ea35","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:23.383+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 18:36:03 policy-pap | [2025-06-16T18:33:23.387+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"10aa937b-f7d1-4c76-92ce-87031228576d","timestampMs":1750098803112,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:23.387+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 18:36:03 policy-pap | [2025-06-16T18:33:23.390+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"100c0bdc-0836-4c51-8f89-991d9512ea35","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"39a1e321-3725-4f00-b036-713652cd70c3","timestampMs":1750098802800,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:23.390+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 100c0bdc-0836-4c51-8f89-991d9512ea35 
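The exchange above is the PAP-to-PDP handshake carried as JSON envelopes on the policy-pdp-pap and policy-heartbeat Kafka topics: every envelope carries messageName, requestId, timestampMs, and the PDP name; each request is guarded by a 30000ms timer (registered, then cancelled on the matching PDP_STATUS, or "discarded (expired)" later); and each consumer drops message types addressed to the other side, which is what the "discarding event of type ..." lines record. Below is a minimal, hypothetical Java sketch of a consumer that routes on messageName in the same way — the class name and group.id are invented, it assumes kafka-clients and Gson on the classpath, and it is illustrative only, not the policy framework's actual message-bus code:

import com.google.gson.Gson;
import com.google.gson.JsonObject;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class PdpPapTopicTap {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address as seen in the ConsumerConfig dump later in this log.
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "pdp-pap-tap"); // hypothetical group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        Gson gson = new Gson();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // Poll forever; a real component would have a shutdown path.
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> rec : records) {
                    JsonObject msg = gson.fromJson(rec.value(), JsonObject.class);
                    String messageName = msg.get("messageName").getAsString();
                    String requestId = msg.get("requestId").getAsString();
                    switch (messageName) {
                        case "PDP_STATUS" -> System.out.printf("status %s state=%s%n",
                                requestId, msg.get("state").getAsString());
                        case "PDP_UPDATE", "PDP_STATE_CHANGE" ->
                                System.out.printf("pap request %s (%s)%n", requestId, messageName);
                        // Mirrors the MessageTypeDispatcher lines in the log,
                        // e.g. for PDP_TOPIC_CHECK.
                        default -> System.out.println("discarding event of type " + messageName);
                    }
                }
            }
        }
    }
}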
18:36:03 policy-pap | [2025-06-16T18:33:23.400+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"10aa937b-f7d1-4c76-92ce-87031228576d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"05bc3ec8-2c2e-4f60-9242-cc6c3fc1f912","timestampMs":1750098803388,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping 18:36:03 policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping enqueue 18:36:03 policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping timer 18:36:03 policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=10aa937b-f7d1-4c76-92ce-87031228576d, expireMs=1750098833377] 18:36:03 policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping listener 18:36:03 policy-pap | [2025-06-16T18:33:23.401+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopped 18:36:03 policy-pap | [2025-06-16T18:33:23.402+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"10aa937b-f7d1-4c76-92ce-87031228576d","timestampMs":1750098803112,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:23.403+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 18:36:03 policy-pap | [2025-06-16T18:33:23.406+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate successful 18:36:03 policy-pap | [2025-06-16T18:33:23.406+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 has no more requests 18:36:03 policy-pap | [2025-06-16T18:33:23.407+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"10aa937b-f7d1-4c76-92ce-87031228576d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"05bc3ec8-2c2e-4f60-9242-cc6c3fc1f912","timestampMs":1750098803388,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:33:23.408+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 10aa937b-f7d1-4c76-92ce-87031228576d 18:36:03 policy-pap | [2025-06-16T18:33:41.610+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 18:36:03 policy-pap | 
[2025-06-16T18:33:41.610+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 18:36:03 policy-pap | [2025-06-16T18:33:41.612+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 18:36:03 policy-pap | [2025-06-16T18:33:52.620+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=dbb93529-7620-483d-89b0-797ac3cb8b31, expireMs=1750098832620] 18:36:03 policy-pap | [2025-06-16T18:33:52.784+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=100c0bdc-0836-4c51-8f89-991d9512ea35, expireMs=1750098832784] 18:36:03 policy-pap | [2025-06-16T18:34:35.172+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group defaultGroup 18:36:03 policy-pap | [2025-06-16T18:34:35.173+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy onap.restart.tca 1.0.0 to subgroup defaultGroup xacml count=2 18:36:03 policy-pap | [2025-06-16T18:34:35.174+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy onap.restart.tca 1.0.0 18:36:03 policy-pap | [2025-06-16T18:34:35.174+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 defaultGroup xacml policies=1 18:36:03 policy-pap | [2025-06-16T18:34:35.175+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup 18:36:03 policy-pap | [2025-06-16T18:34:35.215+00:00|INFO|SessionData|http-nio-6969-exec-3] use cached group defaultGroup 18:36:03 policy-pap | [2025-06-16T18:34:35.216+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy OSDF_CASABLANCA.Affinity_Default 1.0.0 to subgroup defaultGroup xacml count=3 18:36:03 policy-pap | [2025-06-16T18:34:35.216+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy OSDF_CASABLANCA.Affinity_Default 1.0.0 18:36:03 policy-pap | [2025-06-16T18:34:35.216+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 defaultGroup xacml policies=2 18:36:03 policy-pap | [2025-06-16T18:34:35.216+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup 18:36:03 policy-pap | [2025-06-16T18:34:35.216+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group defaultGroup 18:36:03 policy-pap | [2025-06-16T18:34:35.235+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-16T18:34:35Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=OSDF_CASABLANCA.Affinity_Default 1.0.0, action=DEPLOYMENT, timestamp=2025-06-16T18:34:35Z, user=policyadmin)] 18:36:03 policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting 18:36:03 policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting listener 18:36:03 policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting timer 18:36:03 policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer [name=678eb842-8de7-4880-84c1-f110a1ff3c27, expireMs=1750098905268] 18:36:03 policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|ServiceManager|http-nio-6969-exec-3] 
xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting enqueue 18:36:03 policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate started 18:36:03 policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=678eb842-8de7-4880-84c1-f110a1ff3c27, expireMs=1750098905268] 18:36:03 policy-pap | [2025-06-16T18:34:35.268+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"678eb842-8de7-4880-84c1-f110a1ff3c27","timestampMs":1750098875216,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:34:35.280+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"678eb842-8de7-4880-84c1-f110a1ff3c27","timestampMs":1750098875216,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:34:35.280+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 18:36:03 policy-pap | [2025-06-16T18:34:35.281+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"678eb842-8de7-4880-84c1-f110a1ff3c27","timestampMs":1750098875216,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:34:35.282+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 18:36:03 policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"678eb842-8de7-4880-84c1-f110a1ff3c27","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e85dcd01-b32e-47b7-bd0b-30c0aea4d73f","timestampMs":1750098875924,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping 18:36:03 policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping enqueue 18:36:03 policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping timer 18:36:03 policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=678eb842-8de7-4880-84c1-f110a1ff3c27, expireMs=1750098905268] 18:36:03 policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping listener 18:36:03 policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopped 18:36:03 policy-pap | [2025-06-16T18:34:35.932+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"678eb842-8de7-4880-84c1-f110a1ff3c27","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e85dcd01-b32e-47b7-bd0b-30c0aea4d73f","timestampMs":1750098875924,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:34:35.933+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 678eb842-8de7-4880-84c1-f110a1ff3c27 18:36:03 policy-pap | [2025-06-16T18:34:35.942+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate successful 18:36:03 policy-pap | [2025-06-16T18:34:35.942+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 has no more requests 18:36:03 policy-pap | [2025-06-16T18:34:35.942+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 18:36:03 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0},{"policy-type":"onap.policies.optimization.resource.AffinityPolicy","policy-type-version":"1.0.0","policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 18:36:03 policy-pap | [2025-06-16T18:34:59.939+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 18:36:03 policy-pap | [2025-06-16T18:34:59.940+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup defaultGroup xacml count=2 18:36:03 policy-pap | [2025-06-16T18:34:59.940+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 18:36:03 policy-pap | [2025-06-16T18:34:59.940+00:00|INFO|SessionData|http-nio-6969-exec-5] add update xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 defaultGroup xacml policies=0 18:36:03 policy-pap | [2025-06-16T18:34:59.940+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group defaultGroup 18:36:03 policy-pap | [2025-06-16T18:34:59.941+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group defaultGroup 18:36:03 policy-pap | [2025-06-16T18:34:59.953+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-16T18:34:59Z, user=policyadmin)] 18:36:03 policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting 18:36:03 policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting listener 18:36:03 policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting timer 18:36:03 policy-pap | 
[2025-06-16T18:34:59.962+00:00|INFO|TimerManager|http-nio-6969-exec-5] update timer registered Timer [name=56415037-05c3-4c38-b9fb-020356e71e7c, expireMs=1750098929962] 18:36:03 policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate starting enqueue 18:36:03 policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate started 18:36:03 policy-pap | [2025-06-16T18:34:59.962+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"56415037-05c3-4c38-b9fb-020356e71e7c","timestampMs":1750098899940,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:34:59.974+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"56415037-05c3-4c38-b9fb-020356e71e7c","timestampMs":1750098899940,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:34:59.974+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"56415037-05c3-4c38-b9fb-020356e71e7c","timestampMs":1750098899940,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:34:59.974+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 18:36:03 policy-pap | [2025-06-16T18:34:59.974+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 18:36:03 policy-pap | [2025-06-16T18:34:59.979+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"56415037-05c3-4c38-b9fb-020356e71e7c","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"3270cf9c-3884-4825-aa2b-8edb8611600f","timestampMs":1750098899970,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:34:59.979+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 56415037-05c3-4c38-b9fb-020356e71e7c 18:36:03 policy-pap | [2025-06-16T18:34:59.985+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"56415037-05c3-4c38-b9fb-020356e71e7c","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"3270cf9c-3884-4825-aa2b-8edb8611600f","timestampMs":1750098899970,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:34:59.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping 18:36:03 policy-pap | [2025-06-16T18:34:59.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping enqueue 18:36:03 policy-pap | [2025-06-16T18:34:59.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping timer 18:36:03 policy-pap | [2025-06-16T18:34:59.985+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=56415037-05c3-4c38-b9fb-020356e71e7c, expireMs=1750098929962] 18:36:03 policy-pap | [2025-06-16T18:34:59.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopping listener 18:36:03 policy-pap | [2025-06-16T18:34:59.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate stopped 18:36:03 policy-pap | [2025-06-16T18:34:59.999+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 PdpUpdate successful 18:36:03 policy-pap | [2025-06-16T18:34:59.999+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4 has no more requests 18:36:03 policy-pap | [2025-06-16T18:34:59.999+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 18:36:03 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}]} 18:36:03 policy-pap | [2025-06-16T18:35:05.268+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=678eb842-8de7-4880-84c1-f110a1ff3c27, expireMs=1750098905268] 18:36:03 policy-pap | [2025-06-16T18:35:15.435+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 18:36:03 policy-pap | [2025-06-16T18:35:22.775+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:03 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"846e2fcb-c890-4d0f-a2c8-5f3e4f1941ca","timestampMs":1750098922765,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:35:22.775+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-pap | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"846e2fcb-c890-4d0f-a2c8-5f3e4f1941ca","timestampMs":1750098922765,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-pap | [2025-06-16T18:35:22.776+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 18:36:03 policy-xacml-pdp | Waiting for pap port 6969... 18:36:03 policy-xacml-pdp | pap (172.17.0.8:6969) open 18:36:03 policy-xacml-pdp | Waiting for kafka port 9092... 18:36:03 policy-xacml-pdp | kafka (172.17.0.5:9092) open 18:36:03 policy-xacml-pdp | + KEYSTORE=/opt/app/policy/pdpx/etc/ssl/policy-keystore 18:36:03 policy-xacml-pdp | + TRUSTSTORE=/opt/app/policy/pdpx/etc/ssl/policy-truststore 18:36:03 policy-xacml-pdp | + KEYSTORE_PASSWD=Pol1cy_0nap 18:36:03 policy-xacml-pdp | + TRUSTSTORE_PASSWD=Pol1cy_0nap 18:36:03 policy-xacml-pdp | + '[' 0 -ge 1 ] 18:36:03 policy-xacml-pdp | + CONFIG_FILE= 18:36:03 policy-xacml-pdp | + '[' -z ] 18:36:03 policy-xacml-pdp | + CONFIG_FILE=/opt/app/policy/pdpx/etc/defaultConfig.json 18:36:03 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-truststore ] 18:36:03 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-keystore ] 18:36:03 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/xacml.properties ] 18:36:03 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/logback.xml ] 18:36:03 policy-xacml-pdp | + echo 'Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json' 18:36:03 policy-xacml-pdp | Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json 18:36:03 policy-xacml-pdp | + /usr/lib/jvm/default-jvm/bin/java -cp '/opt/app/policy/pdpx/etc:/opt/app/policy/pdpx/lib/*' '-Dlogback.configurationFile=/opt/app/policy/pdpx/etc/logback.xml' '-Djavax.net.ssl.keyStore=/opt/app/policy/pdpx/etc/ssl/policy-keystore' '-Djavax.net.ssl.keyStorePassword=Pol1cy_0nap' '-Djavax.net.ssl.trustStore=/opt/app/policy/pdpx/etc/ssl/policy-truststore' '-Djavax.net.ssl.trustStorePassword=Pol1cy_0nap' org.onap.policy.pdpx.main.startstop.Main -c /opt/app/policy/pdpx/etc/defaultConfig.json 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:16.683+00:00|INFO|Main|main] Starting policy xacml pdp service with arguments - [-c, /opt/app/policy/pdpx/etc/defaultConfig.json] 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:16.774+00:00|INFO|XacmlPdpActivator|main] Activator initializing using org.onap.policy.pdpx.main.parameters.XacmlPdpParameterGroup@37858383 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:16.816+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:03 policy-xacml-pdp | allow.auto.create.topics = true 18:36:03 policy-xacml-pdp | auto.commit.interval.ms = 5000 18:36:03 policy-xacml-pdp | auto.include.jmx.reporter = true 18:36:03 policy-xacml-pdp | auto.offset.reset = latest 18:36:03 policy-xacml-pdp | bootstrap.servers = [kafka:9092] 18:36:03 policy-xacml-pdp | check.crcs = true 18:36:03 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips 18:36:03 policy-xacml-pdp | client.id = consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-1 18:36:03 policy-xacml-pdp | client.rack = 18:36:03 policy-xacml-pdp | connections.max.idle.ms = 540000 18:36:03 policy-xacml-pdp | default.api.timeout.ms = 60000 18:36:03 policy-xacml-pdp | enable.auto.commit = 
true 18:36:03 policy-xacml-pdp | enable.metrics.push = true 18:36:03 policy-xacml-pdp | exclude.internal.topics = true 18:36:03 policy-xacml-pdp | fetch.max.bytes = 52428800 18:36:03 policy-xacml-pdp | fetch.max.wait.ms = 500 18:36:03 policy-xacml-pdp | fetch.min.bytes = 1 18:36:03 policy-xacml-pdp | group.id = 183ef33a-1420-47be-a802-23c79d9c9b0a 18:36:03 policy-xacml-pdp | group.instance.id = null 18:36:03 policy-xacml-pdp | group.protocol = classic 18:36:03 policy-xacml-pdp | group.remote.assignor = null 18:36:03 policy-xacml-pdp | heartbeat.interval.ms = 3000 18:36:03 policy-xacml-pdp | interceptor.classes = [] 18:36:03 policy-xacml-pdp | internal.leave.group.on.close = true 18:36:03 policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:03 policy-xacml-pdp | isolation.level = read_uncommitted 18:36:03 policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-xacml-pdp | max.partition.fetch.bytes = 1048576 18:36:03 policy-xacml-pdp | max.poll.interval.ms = 300000 18:36:03 policy-xacml-pdp | max.poll.records = 500 18:36:03 policy-xacml-pdp | metadata.max.age.ms = 300000 18:36:03 policy-xacml-pdp | metadata.recovery.strategy = none 18:36:03 policy-xacml-pdp | metric.reporters = [] 18:36:03 policy-xacml-pdp | metrics.num.samples = 2 18:36:03 policy-xacml-pdp | metrics.recording.level = INFO 18:36:03 policy-xacml-pdp | metrics.sample.window.ms = 30000 18:36:03 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:03 policy-xacml-pdp | receive.buffer.bytes = 65536 18:36:03 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 18:36:03 policy-xacml-pdp | reconnect.backoff.ms = 50 18:36:03 policy-xacml-pdp | request.timeout.ms = 30000 18:36:03 policy-xacml-pdp | retry.backoff.max.ms = 1000 18:36:03 policy-xacml-pdp | retry.backoff.ms = 100 18:36:03 policy-xacml-pdp | sasl.client.callback.handler.class = null 18:36:03 policy-xacml-pdp | sasl.jaas.config = null 18:36:03 policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:03 policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 18:36:03 policy-xacml-pdp | sasl.kerberos.service.name = null 18:36:03 policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:03 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:03 policy-xacml-pdp | sasl.login.callback.handler.class = null 18:36:03 policy-xacml-pdp | sasl.login.class = null 18:36:03 policy-xacml-pdp | sasl.login.connect.timeout.ms = null 18:36:03 policy-xacml-pdp | sasl.login.read.timeout.ms = null 18:36:03 policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 18:36:03 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 18:36:03 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 18:36:03 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 18:36:03 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 18:36:03 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 18:36:03 policy-xacml-pdp | sasl.mechanism = GSSAPI 18:36:03 policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 18:36:03 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null 18:36:03 policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null 18:36:03 policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:03 policy-xacml-pdp | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null 18:36:03 policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope 18:36:03 policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub 18:36:03 policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null 18:36:03 policy-xacml-pdp | security.protocol = PLAINTEXT 18:36:03 policy-xacml-pdp | security.providers = null 18:36:03 policy-xacml-pdp | send.buffer.bytes = 131072 18:36:03 policy-xacml-pdp | session.timeout.ms = 45000 18:36:03 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 18:36:03 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 18:36:03 policy-xacml-pdp | ssl.cipher.suites = null 18:36:03 policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:03 policy-xacml-pdp | ssl.endpoint.identification.algorithm = https 18:36:03 policy-xacml-pdp | ssl.engine.factory.class = null 18:36:03 policy-xacml-pdp | ssl.key.password = null 18:36:03 policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 18:36:03 policy-xacml-pdp | ssl.keystore.certificate.chain = null 18:36:03 policy-xacml-pdp | ssl.keystore.key = null 18:36:03 policy-xacml-pdp | ssl.keystore.location = null 18:36:03 policy-xacml-pdp | ssl.keystore.password = null 18:36:03 policy-xacml-pdp | ssl.keystore.type = JKS 18:36:03 policy-xacml-pdp | ssl.protocol = TLSv1.3 18:36:03 policy-xacml-pdp | ssl.provider = null 18:36:03 policy-xacml-pdp | ssl.secure.random.implementation = null 18:36:03 policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX 18:36:03 policy-xacml-pdp | ssl.truststore.certificates = null 18:36:03 policy-xacml-pdp | ssl.truststore.location = null 18:36:03 policy-xacml-pdp | ssl.truststore.password = null 18:36:03 policy-xacml-pdp | ssl.truststore.type = JKS 18:36:03 policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:16.851+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:16.983+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:16.983+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:16.983+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098796982 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:16.985+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-1, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Subscribed to topic(s): policy-pdp-pap 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.042+00:00|INFO|XacmlPdpApplicationManager|main] Initialization applications org.onap.policy.pdpx.main.parameters.XacmlApplicationParameters@7ec3394b JerseyClient(name=policyApiParameters, https=false, selfSignedCerts=false, hostname=policy-api, port=6969, basePath=null, userName=policyadmin, password=zb!XztG34, client=org.glassfish.jersey.client.JerseyClient@698122b2, baseUrl=http://policy-api:6969/, alive=true) 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.053+00:00|INFO|XacmlPdpApplicationManager|main] Application guard supports [onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, 
onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0] 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.054+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath guard at this path /opt/app/policy/pdpx/apps/guard 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.054+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/guard 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/guard/xacml.properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:03 policy-xacml-pdp | {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.persistenceunit -> OperationsHistoryPU 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.name -> GetOperationOutcome 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.055+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:03 policy-xacml-pdp | 
[2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.description -> Returns operation outcome 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.description -> Returns operation counts based on time window 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.password -> policy_user 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.issuer -> urn:org:onap:xacml:guard:get-operation-outcome 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.persistenceunit -> OperationsHistoryPU 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.driver -> org.postgresql.Driver 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.name -> CountRecentOperations 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.056+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.url -> jdbc:postgresql://postgres:5432/operationshistory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.user -> policy_user 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.issuer -> urn:org:onap:xacml:guard:count-recent-operations 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] xacml.pip.engines -> count-recent-operations,get-operation-outcome 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.057+00:00|INFO|StdXacmlApplicationServiceProvider|main] {count-recent-operations.persistenceunit=OperationsHistoryPU, 
get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.059+00:00|WARN|XACMLProperties|main] Properties file /usr/lib/jvm/java-17-openjdk/lib/xacml.properties cannot be read. 
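As the key-by-key dump shows, each PIP is declared in xacml.properties by a prefix listed in xacml.pip.engines, with .classname, .issuer, and .persistenceunit entries hanging off that prefix. A minimal sketch of reading such a file with nothing beyond java.util.Properties (the path is the one logged above; the printed fields mirror the dump):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// List the PIP engines configured in a guard-style xacml.properties file,
// mirroring what XacmlPolicyUtils logs above.
public final class PipEngineLister {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(
                "/opt/app/policy/pdpx/apps/guard/xacml.properties")) {
            props.load(in);
        }
        // xacml.pip.engines is a comma-separated list of engine prefixes,
        // e.g. "count-recent-operations,get-operation-outcome".
        String engines = props.getProperty("xacml.pip.engines", "");
        for (String id : engines.split(",")) {
            if (id.isBlank()) continue;
            System.out.printf("%s -> %s (issuer %s)%n",
                    id,
                    props.getProperty(id + ".classname"),
                    props.getProperty(id + ".issuer"));
        }
    }
}
```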
18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPdpApplicationManager|main] Application optimization supports [onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0] 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath optimization at this path /opt/app/policy/pdpx/apps/optimization 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/optimization 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/optimization/xacml.properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:03 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.086+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 
18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.087+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.088+00:00|INFO|XacmlPdpApplicationManager|main] Application naming supports [onap.policies.Naming 1.0.0] 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.088+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath naming at this path /opt/app/policy/pdpx/apps/naming 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.088+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/naming 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.088+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/naming/xacml.properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:03 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:03 policy-xacml-pdp | 
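Apart from guard, every application (optimization above; naming here, then native, match, and monitoring below) loads the same property template: empty xacml.rootPolicies and xacml.referencedPolicies plus identical factory bindings, with combined-permit-overrides as the root combining algorithm. A hedged reconstruction of that template, with values copied from the dumps; the class and method names are hypothetical, not ONAP API:

```java
import java.util.Properties;

// Hypothetical helper reproducing the shared per-application defaults
// dumped repeatedly in this log.
public final class DefaultXacmlProps {
    public static Properties template() {
        Properties p = new Properties();
        p.setProperty("xacml.rootPolicies", "");
        p.setProperty("xacml.referencedPolicies", "");
        p.setProperty("xacml.att.policyFinderFactory.combineRootPolicies",
                "urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides");
        p.setProperty("xacml.att.policyFinderFactory",
                "org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory");
        p.setProperty("xacml.pdpEngineFactory",
                "com.att.research.xacmlatt.pdp.ATTPDPEngineFactory");
        p.setProperty("xacml.pepEngineFactory",
                "com.att.research.xacml.std.pep.StdEngineFactory");
        p.setProperty("xacml.dataTypeFactory",
                "com.att.research.xacml.std.StdDataTypeFactory");
        p.setProperty("xacml.att.evaluationContextFactory",
                "com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory");
        p.setProperty("xacml.att.combiningAlgorithmFactory",
                "com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory");
        p.setProperty("xacml.att.functionDefinitionFactory",
                "com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory");
        p.setProperty("xacml.traceEngineFactory",
                "com.att.research.xacml.std.trace.LoggingTraceEngineFactory");
        p.setProperty("xacml.pipFinderFactory",
                "com.att.research.xacml.std.pip.StdPIPFinderFactory");
        return p;
    }
}
```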
[2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.089+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPdpApplicationManager|main] Application native supports [onap.policies.native.Xacml 1.0.0, onap.policies.native.ToscaXacml 1.0.0] 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath native at this path /opt/app/policy/pdpx/apps/native 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is 
/opt/app/policy/pdpx/apps/native 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/native/xacml.properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:03 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.092+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.093+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, 
xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.094+00:00|INFO|XacmlPdpApplicationManager|main] Application match supports [onap.policies.Match 1.0.0] 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.094+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath match at this path /opt/app/policy/pdpx/apps/match 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.094+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/match 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.094+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/match/xacml.properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.094+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:03 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> 
urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.095+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPdpApplicationManager|main] Application monitoring supports [onap.Monitoring 1.0.0] 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath monitoring at this path /opt/app/policy/pdpx/apps/monitoring 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/monitoring 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/monitoring/xacml.properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:03 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, 
xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.096+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.097+00:00|INFO|XacmlPdpApplicationManager|main] Finished applications initialization 
{optimize=org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplication@2b95e48b, native=org.onap.policy.xacml.pdp.application.nativ.NativePdpApplication@4a3329b9, guard=org.onap.policy.xacml.pdp.application.guard.GuardPdpApplication@3dddefd8, naming=org.onap.policy.xacml.pdp.application.naming.NamingPdpApplication@160ac7fb, match=org.onap.policy.xacml.pdp.application.match.MatchPdpApplication@12bfd80d, configure=org.onap.policy.xacml.pdp.application.monitoring.MonitoringPdpApplication@41925502} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.114+00:00|INFO|XacmlPdpHearbeatPublisher|main] heartbeat topic probe 4000ms 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.299+00:00|INFO|ServiceManager|main] service manager starting 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.300+00:00|INFO|ServiceManager|main] service manager starting XACML PDP parameters 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.300+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.300+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f574cc2 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.312+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.312+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:03 policy-xacml-pdp | allow.auto.create.topics = true 18:36:03 policy-xacml-pdp | auto.commit.interval.ms = 5000 18:36:03 policy-xacml-pdp | auto.include.jmx.reporter = true 18:36:03 policy-xacml-pdp | auto.offset.reset = latest 18:36:03 policy-xacml-pdp | bootstrap.servers = [kafka:9092] 18:36:03 policy-xacml-pdp | check.crcs = true 18:36:03 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips 18:36:03 policy-xacml-pdp | client.id = consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2 18:36:03 policy-xacml-pdp | client.rack = 18:36:03 policy-xacml-pdp | connections.max.idle.ms = 540000 18:36:03 policy-xacml-pdp | default.api.timeout.ms = 60000 18:36:03 policy-xacml-pdp | enable.auto.commit = true 18:36:03 policy-xacml-pdp | enable.metrics.push = true 18:36:03 policy-xacml-pdp | exclude.internal.topics = true 18:36:03 policy-xacml-pdp | fetch.max.bytes = 52428800 18:36:03 policy-xacml-pdp | fetch.max.wait.ms = 500 18:36:03 policy-xacml-pdp | fetch.min.bytes 
= 1 18:36:03 policy-xacml-pdp | group.id = 183ef33a-1420-47be-a802-23c79d9c9b0a 18:36:03 policy-xacml-pdp | group.instance.id = null 18:36:03 policy-xacml-pdp | group.protocol = classic 18:36:03 policy-xacml-pdp | group.remote.assignor = null 18:36:03 policy-xacml-pdp | heartbeat.interval.ms = 3000 18:36:03 policy-xacml-pdp | interceptor.classes = [] 18:36:03 policy-xacml-pdp | internal.leave.group.on.close = true 18:36:03 policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:03 policy-xacml-pdp | isolation.level = read_uncommitted 18:36:03 policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-xacml-pdp | max.partition.fetch.bytes = 1048576 18:36:03 policy-xacml-pdp | max.poll.interval.ms = 300000 18:36:03 policy-xacml-pdp | max.poll.records = 500 18:36:03 policy-xacml-pdp | metadata.max.age.ms = 300000 18:36:03 policy-xacml-pdp | metadata.recovery.strategy = none 18:36:03 policy-xacml-pdp | metric.reporters = [] 18:36:03 policy-xacml-pdp | metrics.num.samples = 2 18:36:03 policy-xacml-pdp | metrics.recording.level = INFO 18:36:03 policy-xacml-pdp | metrics.sample.window.ms = 30000 18:36:03 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:03 policy-xacml-pdp | receive.buffer.bytes = 65536 18:36:03 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 18:36:03 policy-xacml-pdp | reconnect.backoff.ms = 50 18:36:03 policy-xacml-pdp | request.timeout.ms = 30000 18:36:03 policy-xacml-pdp | retry.backoff.max.ms = 1000 18:36:03 policy-xacml-pdp | retry.backoff.ms = 100 18:36:03 policy-xacml-pdp | sasl.client.callback.handler.class = null 18:36:03 policy-xacml-pdp | sasl.jaas.config = null 18:36:03 policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:03 policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 18:36:03 policy-xacml-pdp | sasl.kerberos.service.name = null 18:36:03 policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:03 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:03 policy-xacml-pdp | sasl.login.callback.handler.class = null 18:36:03 policy-xacml-pdp | sasl.login.class = null 18:36:03 policy-xacml-pdp | sasl.login.connect.timeout.ms = null 18:36:03 policy-xacml-pdp | sasl.login.read.timeout.ms = null 18:36:03 policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 18:36:03 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 18:36:03 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 18:36:03 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 18:36:03 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 18:36:03 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 18:36:03 policy-xacml-pdp | sasl.mechanism = GSSAPI 18:36:03 policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 18:36:03 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null 18:36:03 policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null 18:36:03 policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null 18:36:03 policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope 
18:36:03 policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub 18:36:03 policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null 18:36:03 policy-xacml-pdp | security.protocol = PLAINTEXT 18:36:03 policy-xacml-pdp | security.providers = null 18:36:03 policy-xacml-pdp | send.buffer.bytes = 131072 18:36:03 policy-xacml-pdp | session.timeout.ms = 45000 18:36:03 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 18:36:03 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 18:36:03 policy-xacml-pdp | ssl.cipher.suites = null 18:36:03 policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:03 policy-xacml-pdp | ssl.endpoint.identification.algorithm = https 18:36:03 policy-xacml-pdp | ssl.engine.factory.class = null 18:36:03 policy-xacml-pdp | ssl.key.password = null 18:36:03 policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 18:36:03 policy-xacml-pdp | ssl.keystore.certificate.chain = null 18:36:03 policy-xacml-pdp | ssl.keystore.key = null 18:36:03 policy-xacml-pdp | ssl.keystore.location = null 18:36:03 policy-xacml-pdp | ssl.keystore.password = null 18:36:03 policy-xacml-pdp | ssl.keystore.type = JKS 18:36:03 policy-xacml-pdp | ssl.protocol = TLSv1.3 18:36:03 policy-xacml-pdp | ssl.provider = null 18:36:03 policy-xacml-pdp | ssl.secure.random.implementation = null 18:36:03 policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX 18:36:03 policy-xacml-pdp | ssl.truststore.certificates = null 18:36:03 policy-xacml-pdp | ssl.truststore.location = null 18:36:03 policy-xacml-pdp | ssl.truststore.password = null 18:36:03 policy-xacml-pdp | ssl.truststore.type = JKS 18:36:03 policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.313+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.326+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.326+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.326+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098797326 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.326+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Subscribed to topic(s): policy-pdp-pap 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.326+00:00|INFO|ServiceManager|main] service manager starting topics 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.327+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.327+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink 
[partitionId=9411ee5f-cd70-4cb0-9055-3ef4a34488c1, alive=false, publisher=null]]: starting 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.335+00:00|INFO|ProducerConfig|main] ProducerConfig values: 18:36:03 policy-xacml-pdp | acks = -1 18:36:03 policy-xacml-pdp | auto.include.jmx.reporter = true 18:36:03 policy-xacml-pdp | batch.size = 16384 18:36:03 policy-xacml-pdp | bootstrap.servers = [kafka:9092] 18:36:03 policy-xacml-pdp | buffer.memory = 33554432 18:36:03 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips 18:36:03 policy-xacml-pdp | client.id = producer-1 18:36:03 policy-xacml-pdp | compression.gzip.level = -1 18:36:03 policy-xacml-pdp | compression.lz4.level = 9 18:36:03 policy-xacml-pdp | compression.type = none 18:36:03 policy-xacml-pdp | compression.zstd.level = 3 18:36:03 policy-xacml-pdp | connections.max.idle.ms = 540000 18:36:03 policy-xacml-pdp | delivery.timeout.ms = 120000 18:36:03 policy-xacml-pdp | enable.idempotence = true 18:36:03 policy-xacml-pdp | enable.metrics.push = true 18:36:03 policy-xacml-pdp | interceptor.classes = [] 18:36:03 policy-xacml-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:03 policy-xacml-pdp | linger.ms = 0 18:36:03 policy-xacml-pdp | max.block.ms = 60000 18:36:03 policy-xacml-pdp | max.in.flight.requests.per.connection = 5 18:36:03 policy-xacml-pdp | max.request.size = 1048576 18:36:03 policy-xacml-pdp | metadata.max.age.ms = 300000 18:36:03 policy-xacml-pdp | metadata.max.idle.ms = 300000 18:36:03 policy-xacml-pdp | metadata.recovery.strategy = none 18:36:03 policy-xacml-pdp | metric.reporters = [] 18:36:03 policy-xacml-pdp | metrics.num.samples = 2 18:36:03 policy-xacml-pdp | metrics.recording.level = INFO 18:36:03 policy-xacml-pdp | metrics.sample.window.ms = 30000 18:36:03 policy-xacml-pdp | partitioner.adaptive.partitioning.enable = true 18:36:03 policy-xacml-pdp | partitioner.availability.timeout.ms = 0 18:36:03 policy-xacml-pdp | partitioner.class = null 18:36:03 policy-xacml-pdp | partitioner.ignore.keys = false 18:36:03 policy-xacml-pdp | receive.buffer.bytes = 32768 18:36:03 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 18:36:03 policy-xacml-pdp | reconnect.backoff.ms = 50 18:36:03 policy-xacml-pdp | request.timeout.ms = 30000 18:36:03 policy-xacml-pdp | retries = 2147483647 18:36:03 policy-xacml-pdp | retry.backoff.max.ms = 1000 18:36:03 policy-xacml-pdp | retry.backoff.ms = 100 18:36:03 policy-xacml-pdp | sasl.client.callback.handler.class = null 18:36:03 policy-xacml-pdp | sasl.jaas.config = null 18:36:03 policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:03 policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 18:36:03 policy-xacml-pdp | sasl.kerberos.service.name = null 18:36:03 policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:03 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:03 policy-xacml-pdp | sasl.login.callback.handler.class = null 18:36:03 policy-xacml-pdp | sasl.login.class = null 18:36:03 policy-xacml-pdp | sasl.login.connect.timeout.ms = null 18:36:03 policy-xacml-pdp | sasl.login.read.timeout.ms = null 18:36:03 policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 18:36:03 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 18:36:03 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 18:36:03 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 18:36:03 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 18:36:03 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 
18:36:03 policy-xacml-pdp | sasl.mechanism = GSSAPI 18:36:03 policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 18:36:03 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null 18:36:03 policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null 18:36:03 policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:03 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null 18:36:03 policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope 18:36:03 policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub 18:36:03 policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null 18:36:03 policy-xacml-pdp | security.protocol = PLAINTEXT 18:36:03 policy-xacml-pdp | security.providers = null 18:36:03 policy-xacml-pdp | send.buffer.bytes = 131072 18:36:03 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 18:36:03 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 18:36:03 policy-xacml-pdp | ssl.cipher.suites = null 18:36:03 policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:03 policy-xacml-pdp | ssl.endpoint.identification.algorithm = https 18:36:03 policy-xacml-pdp | ssl.engine.factory.class = null 18:36:03 policy-xacml-pdp | ssl.key.password = null 18:36:03 policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 18:36:03 policy-xacml-pdp | ssl.keystore.certificate.chain = null 18:36:03 policy-xacml-pdp | ssl.keystore.key = null 18:36:03 policy-xacml-pdp | ssl.keystore.location = null 18:36:03 policy-xacml-pdp | ssl.keystore.password = null 18:36:03 policy-xacml-pdp | ssl.keystore.type = JKS 18:36:03 policy-xacml-pdp | ssl.protocol = TLSv1.3 18:36:03 policy-xacml-pdp | ssl.provider = null 18:36:03 policy-xacml-pdp | ssl.secure.random.implementation = null 18:36:03 policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX 18:36:03 policy-xacml-pdp | ssl.truststore.certificates = null 18:36:03 policy-xacml-pdp | ssl.truststore.location = null 18:36:03 policy-xacml-pdp | ssl.truststore.password = null 18:36:03 policy-xacml-pdp | ssl.truststore.type = JKS 18:36:03 policy-xacml-pdp | transaction.timeout.ms = 60000 18:36:03 policy-xacml-pdp | transactional.id = null 18:36:03 policy-xacml-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.335+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.359+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
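The ConsumerConfig and ProducerConfig dumps above boil down to a plain String/String Kafka client pair on kafka:9092: a classic-protocol consumer in a UUID-named group reading policy-pdp-pap from the latest offset, and an idempotent producer with acks = -1 and effectively unlimited retries. A minimal sketch with the same effective settings (kafka-clients on the classpath assumed; the group id is shortened here for readability):

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public final class PdpPapClients {
    public static void main(String[] args) {
        Properties c = new Properties();
        c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        c.put(ConsumerConfig.GROUP_ID_CONFIG, "xacml-pdp-demo");  // log uses a UUID group id
        c.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // as dumped above
        c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c);
        consumer.subscribe(List.of("policy-pdp-pap"));

        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // "Instantiated an idempotent producer"
        p.put(ProducerConfig.ACKS_CONFIG, "all");                // acks = -1 in the dump
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<>(p);

        producer.close();
        consumer.close();
    }
}
```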
18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.389+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.389+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.389+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750098797389 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9411ee5f-cd70-4cb0-9055-3ef4a34488c1, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|ServiceManager|main] service manager starting Terminate PDP 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|ServiceManager|main] service manager starting Heartbeat Publisher 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|ServiceManager|main] service manager starting REST Server 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|ServiceManager|main] service manager starting 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.411+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: registering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007fc2942adb70@357358c2 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.412+00:00|INFO|SingleThreadedBusTopicSource|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=2]]]]: register: start not attempted 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.414+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers: 18:36:03 policy-xacml-pdp | [] 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.416+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"d31f15e3-8200-426a-9c05-c67231bf3e73","timestampMs":1750098797398,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.390+00:00|INFO|JettyServletServer|main] JettyJerseyServer 
[JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.420+00:00|INFO|ServiceManager|main] service manager started 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.421+00:00|INFO|ServiceManager|main] service manager started 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.421+00:00|INFO|Main|main] Started policy-xacml-pdp service successfully. 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.425+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.773+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Cluster ID: DURHhdNSQwy0Fksygi2p2A 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.774+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: DURHhdNSQwy0Fksygi2p2A 
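By this point the embedded Jetty server is up on 0.0.0.0:6969 with a Jersey servlet mounted on /* and a Prometheus exporter on /metrics, behind the basic-auth credentials printed in the JettyServletServer dump. A hedged probe using only java.net.http; the localhost address and reachability from outside the compose network are deployment-specific assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Scrape the Prometheus /metrics servlet registered above, using the
// RestServerParameters credentials the CSIT log prints.
public final class MetricsProbe {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder().encodeToString(
                "policyadmin:zb!XztG34".getBytes(StandardCharsets.UTF_8));
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:6969/metrics"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<String> rsp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(rsp.statusCode()); // expect 200 once Jetty reports RUN
    }
}
```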
18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.774+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.775+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.781+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] (Re-)joining group 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.798+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Request joining group due to: need to re-join with the given member-id: consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:17.799+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] (Re-)joining group 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:18.030+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:18.031+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:20.803+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf', protocol='range'} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:20.813+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Finished assignment for group at generation 1: {consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf=Assignment(partitions=[policy-pdp-pap-0])} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:20.822+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2-34109e3f-3432-42e1-84d6-be30d27376bf', protocol='range'} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:20.822+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:20.824+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Adding newly assigned partitions: policy-pdp-pap-0 18:36:03 policy-xacml-pdp | 
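The PDP_TOPIC_CHECK traffic around this point is a self-test: the BidirectionalTopicClient keeps publishing a probe with a fixed requestId until the consumer reads that same message back, proving policy-pdp-pap works end to end; the probe listener is then unregistered and the first heartbeat goes out. A minimal sketch of that publish-until-echoed pattern, not ONAP's implementation, reusing a client pair configured as in the earlier sketch:

```java
import java.time.Duration;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Publish a probe and poll until our own message comes back, bounding retries.
final class TopicCheck {
    static boolean topicReady(KafkaProducer<String, String> producer,
                              KafkaConsumer<String, String> consumer,
                              String topic) {
        String requestId = UUID.randomUUID().toString();
        String probe = "{\"messageName\":\"PDP_TOPIC_CHECK\",\"requestId\":\"" + requestId + "\"}";
        for (int attempt = 0; attempt < 10; attempt++) {
            producer.send(new ProducerRecord<>(topic, probe)); // re-send each attempt
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(2))) {
                if (rec.value().contains(requestId)) {
                    return true; // our own probe was echoed back; topic is usable
                }
            }
        }
        return false;
    }
}
```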
[2025-06-16T18:33:20.831+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Found no committed offset for partition policy-pdp-pap-0 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:20.839+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-183ef33a-1420-47be-a802-23c79d9c9b0a-2, groupId=183ef33a-1420-47be-a802-23c79d9c9b0a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:21.884+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"d31f15e3-8200-426a-9c05-c67231bf3e73","timestampMs":1750098797398,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:21.949+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"d31f15e3-8200-426a-9c05-c67231bf3e73","timestampMs":1750098797398,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:21.951+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:21.951+00:00|INFO|BidirectionalTopicClient|KAFKA-source-policy-pdp-pap] topic policy-pdp-pap is ready; found matching message PdpTopicCheck(super=PdpMessage(messageName=PDP_TOPIC_CHECK, requestId=d31f15e3-8200-426a-9c05-c67231bf3e73, timestampMs=1750098797398, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=null, pdpSubgroup=null)) 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:21.957+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=183ef33a-1420-47be-a802-23c79d9c9b0a, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=1, locked=false, #topicListeners=2]]]]: unregistering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007fc2942adb70@357358c2 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:21.960+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=421a3372-5f8e-464d-b798-a50b4b48cf6c, timestampMs=1750098801958, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=null), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[], deploymentInstanceInfo=null, properties=null, response=null) 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:21.970+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | 
{"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"421a3372-5f8e-464d-b798-a50b4b48cf6c","timestampMs":1750098801958,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.010+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"421a3372-5f8e-464d-b798-a50b4b48cf6c","timestampMs":1750098801958,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.010+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.659+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"dbb93529-7620-483d-89b0-797ac3cb8b31","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.670+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=dbb93529-7620-483d-89b0-797ac3cb8b31, timestampMs=1750098802598, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-df98c171-81af-48a2-b20e-6b7c42a0d39b, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.Naming, typeVersion=1.0.0, properties={policy-instance-name=ONAP_NF_NAMING_TIMESTAMP, naming-models=[{naming-type=VNF, 
naming-recipe=AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP, name-operation=to_lower_case(), naming-properties=[{property-name=AIC_CLOUD_REGION}, {property-name=CONSTANT, property-value=onap-nf}, {property-name=TIMESTAMP}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VNFC, naming-recipe=VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=ENTIRETY, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}, {property-name=NFC_NAMING_CODE}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VF-MODULE, naming-recipe=VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-value=-, property-name=DELIMITER}, {property-name=VF_MODULE_LABEL}, {property-name=VF_MODULE_TYPE}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=PRECEEDING, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}]}]}))], policiesToBeUndeployed=[]) 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.678+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP type: onap.policies.Naming weight: null policy: 18:36:03 policy-xacml-pdp | {"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.736+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is
[... XACML policy XML elided: the element markup was stripped during console capture, leaving only empty prefixed lines. Recoverable content: PolicyId SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy type onap.policies.Naming version 1.0.0, rule description "Default is to PERMIT if the policy matches.", with the ToscaPolicy JSON above embedded as an obligation AttributeValue ...]
18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.742+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=,
xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:03 policy-xacml-pdp | /opt/app/policy/pdpx/apps/naming/xacml.properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.749+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy-version=1.0.0} into application naming 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.750+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"dbb93529-7620-483d-89b0-797ac3cb8b31","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"90b3f482-cbc9-4416-b421-d6129b5f10b4","timestampMs":1750098802750,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.756+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=504884f8-f384-4692-b040-357f65737559, timestampMs=1750098802756, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.761+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"504884f8-f384-4692-b040-357f65737559","timestampMs":1750098802756,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.761+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"dbb93529-7620-483d-89b0-797ac3cb8b31","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"90b3f482-cbc9-4416-b421-d6129b5f10b4","timestampMs":1750098802750,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.762+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.772+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"504884f8-f384-4692-b040-357f65737559","timestampMs":1750098802756,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.773+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type 
PDP_STATUS 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.798+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"100c0bdc-0836-4c51-8f89-991d9512ea35","timestampMs":1750098802598,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.799+00:00|INFO|XacmlPdpStateChangeListener|KAFKA-source-policy-pdp-pap] PDP State Change message has been received from the PAP - PdpStateChange(super=PdpMessage(messageName=PDP_STATE_CHANGE, requestId=100c0bdc-0836-4c51-8f89-991d9512ea35, timestampMs=1750098802598, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-df98c171-81af-48a2-b20e-6b7c42a0d39b, state=ACTIVE) 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.800+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] set state of org.onap.policy.pdpx.main.XacmlState@76fe1a06 to ACTIVE 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.800+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] State change: ACTIVE - Starting rest controller 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.800+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"100c0bdc-0836-4c51-8f89-991d9512ea35","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"39a1e321-3725-4f00-b036-713652cd70c3","timestampMs":1750098802800,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.810+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"100c0bdc-0836-4c51-8f89-991d9512ea35","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"39a1e321-3725-4f00-b036-713652cd70c3","timestampMs":1750098802800,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:22.811+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:23.387+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"10aa937b-f7d1-4c76-92ce-87031228576d","timestampMs":1750098803112,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:23.388+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=10aa937b-f7d1-4c76-92ce-87031228576d, timestampMs=1750098803112, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-df98c171-81af-48a2-b20e-6b7c42a0d39b, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[], policiesToBeUndeployed=[]) 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:23.388+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"10aa937b-f7d1-4c76-92ce-87031228576d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"05bc3ec8-2c2e-4f60-9242-cc6c3fc1f912","timestampMs":1750098803388,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:23.397+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"10aa937b-f7d1-4c76-92ce-87031228576d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"05bc3ec8-2c2e-4f60-9242-cc6c3fc1f912","timestampMs":1750098803388,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:23.398+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:35.668+00:00|INFO|RequestLog|qtp2014233765-33] 172.17.0.2 - policyadmin [16/Jun/2025:18:33:35 +0000] "GET /metrics HTTP/1.1" 200 2135 "" "Prometheus/3.4.1" 18:36:03 policy-xacml-pdp | [2025-06-16T18:33:43.269+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.1 - - [16/Jun/2025:18:33:43 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0" 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:31.737+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:31 +0000] "GET /policy/pdpx/v1/healthcheck?null HTTP/1.1" 200 110 "" "python-requests/2.32.4" 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:31.752+00:00|INFO|RequestLog|qtp2014233765-28] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:31 +0000] "GET /metrics?null HTTP/1.1" 200 2057 "" "python-requests/2.32.4" 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.220+00:00|INFO|GuardTranslator|qtp2014233765-28] Converting Request DecisionRequest(onapName=Guard, onapComponent=Guard-component, onapInstance=Guard-component-instance, requestId=unique-request-guard-1, context=null, action=guard, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={guard={actor=APPC, operation=ModifyConfig, target=f17face5-69cb-4c88-9e0b-7426db7edddd, requestId=c7c6a4aa-bb61-4a15-b831-ba1472dd4a65, clname=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}}) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.238+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-dateTime 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.238+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-date 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.238+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-time 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.238+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:timezone 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: 
urn:org:onap:guard:target:vf-count 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-name 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-id 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-type 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.nf-naming-code 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:vserver.vserver-id 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.239+00:00|WARN|RequestParser|qtp2014233765-28] Unable to extract attribute value from object: urn:org:onap:guard:target:cloud-region.cloud-region-id 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.243+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Constructed using properties {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.243+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Initializing OnapPolicyFinderFactory Properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.243+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Combining root policies 
with urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.249+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Root Policies: 1 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.249+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Referenced Policies: 0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.250+00:00|INFO|StdPolicyFinder|qtp2014233765-28] Updating policy map with policy 3bd63012-99d0-49f6-b77a-63bc4920dbc6 version 1.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.252+00:00|INFO|StdOnapPip|qtp2014233765-28] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.330+00:00|INFO|LogHelper|qtp2014233765-28] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.364+00:00|INFO|Version|qtp2014233765-28] HHH000412: Hibernate ORM core version 6.6.16.Final 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.386+00:00|INFO|RegionFactoryInitiator|qtp2014233765-28] HHH000026: Second-level cache disabled 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.514+00:00|WARN|pooling|qtp2014233765-28] HHH10001002: Using built-in connection pool (not intended for production use) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:33.723+00:00|INFO|pooling|qtp2014233765-28] HHH10001005: Database info: 18:36:03 policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] 18:36:03 policy-xacml-pdp | Database driver: org.postgresql.Driver 18:36:03 policy-xacml-pdp | Database version: 16.4 18:36:03 policy-xacml-pdp | 
Autocommit mode: false 18:36:03 policy-xacml-pdp | Isolation level: undefined/unknown 18:36:03 policy-xacml-pdp | Minimum pool size: 1 18:36:03 policy-xacml-pdp | Maximum pool size: 20 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:34.591+00:00|INFO|JtaPlatformInitiator|qtp2014233765-28] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:34.625+00:00|INFO|StdOnapPip|qtp2014233765-28] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:34.629+00:00|INFO|LogHelper|qtp2014233765-28] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:34.631+00:00|INFO|RegionFactoryInitiator|qtp2014233765-28] HHH000026: Second-level cache disabled 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:34.648+00:00|WARN|pooling|qtp2014233765-28] HHH10001002: Using built-in connection pool (not intended for production use) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:34.663+00:00|INFO|pooling|qtp2014233765-28] HHH10001005: Database info: 18:36:03 policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] 18:36:03 policy-xacml-pdp | Database driver: org.postgresql.Driver 18:36:03 policy-xacml-pdp | Database version: 16.4 18:36:03 policy-xacml-pdp | Autocommit mode: false 18:36:03 policy-xacml-pdp | Isolation level: undefined/unknown 18:36:03 policy-xacml-pdp | Minimum pool size: 1 18:36:03 policy-xacml-pdp | Maximum pool size: 20 18:36:03 policy-xacml-pdp | 
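[Annotation] The two Hibernate banners above show the guard PIPs (CountRecentOperations, GetOperationOutcome) each bootstrapping the OperationsHistoryPU persistence unit against jdbc:postgresql://postgres:5432/operationshistory with the built-in connection pool. A quick hand check against the same database, a minimal sketch assuming psycopg2 is installed and the postgres container is reachable (connection settings taken from the jakarta.persistence.jdbc.* properties logged above):

    import psycopg2  # assumption: psycopg2 is available

    conn = psycopg2.connect(
        host="postgres",
        port=5432,
        dbname="operationshistory",
        user="policy_user",
        password="policy_user",  # value as logged in the PIP properties
    )
    with conn, conn.cursor() as cur:
        # Liveness probe only; the table layout is not shown in this log.
        cur.execute("SELECT 1")
        print(cur.fetchone())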
[2025-06-16T18:34:34.692+00:00|INFO|JtaPlatformInitiator|qtp2014233765-28] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:34.695+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-28] Elapsed Time: 1456ms 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:34.696+00:00|INFO|GuardTranslator|qtp2014233765-28] Converting Response {results=[{decision=NotApplicable,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component-instance}],includeInResults=true}{attributeId=urn:org:onap:guard:request:request-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=unique-request-guard-1}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:guard:clname:clname-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}],includeInResults=true}{attributeId=urn:org:onap:guard:actor:actor-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=APPC}],includeInResults=true}{attributeId=urn:org:onap:guard:operation:operation-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ModifyConfig}],includeInResults=true}{attributeId=urn:org:onap:guard:target:target-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=f17face5-69cb-4c88-9e0b-7426db7edddd}],includeInResults=true}]}]}]} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:34.699+00:00|INFO|RequestLog|qtp2014233765-28] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:33 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 19 "" "python-requests/2.32.4" 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.283+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"678eb842-8de7-4880-84c1-f110a1ff3c27","timestampMs":1750098875216,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.283+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=678eb842-8de7-4880-84c1-f110a1ff3c27, timestampMs=1750098875216, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-df98c171-81af-48a2-b20e-6b7c42a0d39b, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.monitoring.tcagen2, typeVersion=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}})), ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.optimization.resource.AffinityPolicy, typeVersion=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}))], policiesToBeUndeployed=[]) 18:36:03 policy-xacml-pdp | 
[2025-06-16T18:34:35.284+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: onap.restart.tca type: onap.policies.monitoring.tcagen2 weight: null policy: 18:36:03 policy-xacml-pdp | {"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.319+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is
[... XACML policy XML elided: the element markup was stripped during console capture, leaving only empty prefixed lines. Recoverable content: PolicyId onap.restart.tca, policy type onap.policies.monitoring.tcagen2 version 1.0.0, rule description "Default is to PERMIT if the policy matches.", with the ToscaPolicy JSON above embedded as an obligation AttributeValue ...]
18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.319+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:03 policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.320+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} into application monitoring 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.320+00:00|INFO|OptimizationPdpApplication|KAFKA-source-policy-pdp-pap] optimization can support onap.policies.optimization.resource.AffinityPolicy 1.0.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.321+00:00|ERROR|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] PolicyType not found in data area yet /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml 18:36:03 policy-xacml-pdp | java.nio.file.NoSuchFileException:
/opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml 18:36:03 policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92) 18:36:03 policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) 18:36:03 policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) 18:36:03 policy-xacml-pdp | at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218) 18:36:03 policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:380) 18:36:03 policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:432) 18:36:03 policy-xacml-pdp | at java.base/java.nio.file.Files.readAllBytes(Files.java:3288) 18:36:03 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.loadPolicyType(StdMatchableTranslator.java:515) 18:36:03 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.findPolicyType(StdMatchableTranslator.java:480) 18:36:03 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.convertPolicy(StdMatchableTranslator.java:241) 18:36:03 policy-xacml-pdp | at org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplicationTranslator.convertPolicy(OptimizationPdpApplicationTranslator.java:72) 18:36:03 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider.loadPolicy(StdXacmlApplicationServiceProvider.java:127) 18:36:03 policy-xacml-pdp | at org.onap.policy.pdpx.main.rest.XacmlPdpApplicationManager.loadDeployedPolicy(XacmlPdpApplicationManager.java:199) 18:36:03 policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.XacmlPdpUpdatePublisher.handlePdpUpdate(XacmlPdpUpdatePublisher.java:91) 18:36:03 policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:72) 18:36:03 policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:36) 18:36:03 policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.ScoListener.onTopicEvent(ScoListener.java:75) 18:36:03 policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher.onTopicEvent(MessageTypeDispatcher.java:97) 18:36:03 policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.JsonListener.onTopicEvent(JsonListener.java:61) 18:36:03 policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.TopicBase.broadcast(TopicBase.java:170) 18:36:03 policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.fetchAllMessages(SingleThreadedBusTopicSource.java:252) 18:36:03 policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.run(SingleThreadedBusTopicSource.java:235) 18:36:03 policy-xacml-pdp | at java.base/java.lang.Thread.run(Thread.java:840) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.349+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.352+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.576+00:00|INFO|RequestLog|qtp2014233765-32] 172.17.0.2 - policyadmin [16/Jun/2025:18:34:35 +0000] "GET /metrics HTTP/1.1" 200 2179 "" 
"Prometheus/3.4.1" 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.860+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] Successfully pulled onap.policies.optimization.resource.AffinityPolicy 1.0.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.889+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.resource.AffinityPolicy:1.0.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.890+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Retrieving datatype policy.data.affinityProperties_properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.890+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.Resource:1.0.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.890+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.Optimization:1.0.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.890+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Found root - done scanning 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.891+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: OSDF_CASABLANCA.Affinity_Default type: onap.policies.optimization.resource.AffinityPolicy weight: 0 policy: 18:36:03 policy-xacml-pdp | {"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.907+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | Default is to PERMIT if the policy matches. 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | IF exists and is equal 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | Does the policy-type attribute exist? 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | Get the size of policy-type attributes 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 0 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | Is this policy-type in the list? 
18:36:03 policy-xacml-pdp | onap.policies.optimization.resource.AffinityPolicy 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | OSDF_CASABLANCA.Affinity_Default 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | {"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}} 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 0 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | onap.policies.optimization.resource.AffinityPolicy 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.923+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | Default is to PERMIT if the policy matches. 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | IF exists and is equal 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | Does the policy-type attribute exist? 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | Get the size of policy-type attributes 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 0 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | Is this policy-type in the list? 
18:36:03 policy-xacml-pdp | onap.policies.optimization.resource.AffinityPolicy 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | OSDF_CASABLANCA.Affinity_Default 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | {"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}} 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 0 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | onap.policies.optimization.resource.AffinityPolicy 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.923+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:03 policy-xacml-pdp | /opt/app/policy/pdpx/apps/optimization/xacml.properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.924+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0} into application optimization 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.924+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"678eb842-8de7-4880-84c1-f110a1ff3c27","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e85dcd01-b32e-47b7-bd0b-30c0aea4d73f","timestampMs":1750098875924,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.938+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 
18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"678eb842-8de7-4880-84c1-f110a1ff3c27","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e85dcd01-b32e-47b7-bd0b-30c0aea4d73f","timestampMs":1750098875924,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:35.939+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.462+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.464+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:policy-type 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.465+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.465+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Initializing OnapPolicyFinderFactory Properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.465+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.466+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Loading policy file /opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.483+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Root Policies: 1 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.483+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Referenced Policies: 0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.483+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating policy map with policy f6cfd002-116f-46f6-a44b-6b5fe64bb918 version 1.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.483+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating 
policy map with policy onap.restart.tca version 1.0.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.501+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-30] Elapsed Time: 37ms 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.501+00:00|INFO|StdBaseTranslator|qtp2014233765-30] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=f6cfd002-116f-46f6-a44b-6b5fe64bb918,version=1.0}]}]} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.501+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Obligation: urn:org:onap:rest:body 18:36:03 policy-xacml-pdp | 
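The "POST /policy/pdpx/v1/decision?abbrev=true" recorded above is the CSIT's python-requests client asking the monitoring application for the onap.restart.tca policy. A minimal sketch of an equivalent call follows; the endpoint path, the abbrev parameter, and the DecisionRequest fields (onapName=DCAE, onapComponent=PolicyHandler, action=configure, resource={policy-id=onap.restart.tca}) are taken from the log, while the host, port, credentials, and exact JSON key casing of the request body are assumptions about the test environment.

    import requests

    # Assumed PDP-X address and credentials; the log only shows the client IP
    # (172.17.0.6) and the user "policyadmin", not the port or password.
    PDP_URL = "http://localhost:6969/policy/pdpx/v1/decision"
    AUTH = ("policyadmin", "CHANGE_ME")  # hypothetical credentials

    # Field values mirror the DecisionRequest printed by
    # StdCombinedPolicyResultsTranslator above; JSON key casing is assumed.
    body = {
        "ONAPName": "DCAE",
        "ONAPComponent": "PolicyHandler",
        "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
        "action": "configure",
        "resource": {"policy-id": "onap.restart.tca"},
    }

    # abbrev=true asks the application to abbreviate the decision results,
    # which is why the logged reply body is only 146 bytes.
    resp = requests.post(PDP_URL, json=body, params={"abbrev": "true"}, auth=AUTH)
    resp.raise_for_status()
    print(resp.json())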
[2025-06-16T18:34:59.501+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.501+00:00|INFO|MonitoringPdpApplication|qtp2014233765-30] Abbreviating decision results DecisionResponse(status=null, message=null, advice=null, obligations=null, policies={onap.restart.tca={type=onap.policies.monitoring.tcagen2, type_version=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}}, name=onap.restart.tca, version=1.0.0, metadata={policy-id=onap.restart.tca, policy-version=1.0.0}}}, attributes=null) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.504+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:59 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 146 "" "python-requests/2.32.4" 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.521+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.522+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:policy-type 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.523+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-30] Elapsed Time: 1ms 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.523+00:00|INFO|StdBaseTranslator|qtp2014233765-30] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=f6cfd002-116f-46f6-a44b-6b5fe64bb918,version=1.0}]}]} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.523+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Obligation: urn:org:onap:rest:body 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.524+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator 18:36:03 policy-xacml-pdp | 
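The "Converting Response" blob above shows how StdBaseTranslator turns the XACML Permit and its urn:org:onap:rest:body obligation into the REST reply: the policycontent attribute assignment becomes an entry in the response's policies map, keyed by policy-id. A small sketch of reading that map on the client side, assuming the wire JSON has the shape of the DecisionResponse the PDP logs (a top-level "policies" object; null fields omitted):

    import json

    # A trimmed response body, shaped like the logged DecisionResponse
    # (hypothetical sample, cut down from the tca.policy content above).
    raw = '''{"policies": {"onap.restart.tca": {
        "type": "onap.policies.monitoring.tcagen2",
        "version": "1.0.0",
        "metadata": {"policy-id": "onap.restart.tca",
                     "policy-version": "1.0.0"}}}}'''

    decision = json.loads(raw)
    for policy_id, policy in decision.get("policies", {}).items():
        # Each entry carries the TOSCA policy; with abbrev=true the
        # properties are stripped and only identity fields remain.
        print(policy_id, policy["type"], policy["version"])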
[2025-06-16T18:34:59.524+00:00|INFO|MonitoringPdpApplication|qtp2014233765-30] Unsupported query param for Monitoring application: {null=[]} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.527+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:59 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1055 "" "python-requests/2.32.4" 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.542+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-33] Converting Request DecisionRequest(onapName=SDNC, onapComponent=SDNC-component, onapInstance=SDNC-component-instance, requestId=unique-request-sdnc-1, context=null, action=naming, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={nfRole=[], naming-type=[], property-name=[], policy-type=[onap.policies.Naming]}) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.543+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:resource:resource-id 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.543+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.543+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Initializing OnapPolicyFinderFactory Properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.543+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.544+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Loading policy file /opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.550+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Root Policies: 1 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.550+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Referenced Policies: 0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.550+00:00|INFO|StdPolicyFinder|qtp2014233765-33] Updating policy map with policy d657ae60-0cd3-416f-b2f6-ba07ac03ceaf version 1.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.550+00:00|INFO|StdPolicyFinder|qtp2014233765-33] Updating policy map with policy SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP version 1.0.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.552+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-33] Elapsed Time: 9ms 18:36:03 policy-xacml-pdp | 
[2025-06-16T18:34:59.552+00:00|INFO|StdBaseTranslator|qtp2014233765-33] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component-instance}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:policy-type,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}],includeInResults=true}]}],policyIdentifiers=[{id=SDNC_Policy.ONAP_NF_NAMING_TI
MESTAMP,version=1.0.0}],policySetIdentifiers=[{id=d657ae60-0cd3-416f-b2f6-ba07ac03ceaf,version=1.0}]}]} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.552+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-33] Obligation: urn:org:onap:rest:body 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.552+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-33] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.556+00:00|INFO|RequestLog|qtp2014233765-33] 172.17.0.6 - policyadmin [16/Jun/2025:18:34:59 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1598 "" "python-requests/2.32.4" 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.567+00:00|INFO|StdMatchableTranslator|qtp2014233765-31] Converting Request DecisionRequest(onapName=OOF, onapComponent=OOF-component, onapInstance=OOF-component-instance, requestId=null, context={subscriberName=[]}, action=optimize, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={scope=[], services=[], resources=[], geography=[]}) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.569+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.569+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Initializing OnapPolicyFinderFactory Properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.569+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.570+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Loading policy file /opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.576+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Root Policies: 1 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.576+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-31] Referenced Policies: 0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.576+00:00|INFO|StdPolicyFinder|qtp2014233765-31] Updating policy map with policy 0d949b50-cf16-40f6-9c19-026e2fd2de1a version 1.0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.576+00:00|INFO|StdPolicyFinder|qtp2014233765-31] Updating policy map with policy OSDF_CASABLANCA.Affinity_Default version 1.0.0 18:36:03 policy-xacml-pdp | 
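Each "Constructed using properties" entry above shows the per-application xacml.properties contract: xacml.rootPolicies names the logical root policy ids (a single root1 here) and each <id>.file points at a generated XACML policy file under /opt/app/policy/pdpx/apps/. A minimal sketch of resolving that indirection, assuming a plain key=value map like the one the log prints; the comma-splitting generalizes the single root1 entry seen here:

    # Resolve root policy files from a xacml.properties-style map, the way
    # the finder factory follows xacml.rootPolicies=root1 to root1.file=...
    def root_policy_files(props: dict) -> list:
        roots = props.get("xacml.rootPolicies", "")
        return [props[f"{r}.file"] for r in roots.split(",") if r]

    props = {
        "xacml.rootPolicies": "root1",
        "root1.file": "/opt/app/policy/pdpx/apps/optimization/"
                      "OSDF_CASABLANCA.Affinity_Default_1.0.0.xml",
    }
    print(root_policy_files(props))  # -> the Affinity_Default policy file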
[2025-06-16T18:34:59.578+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-31] Elapsed Time: 9ms 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.578+00:00|INFO|StdBaseTranslator|qtp2014233765-31] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OSDF_CASABLANCA.Affinity_Default}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:weight,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#integer,value=0}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.optimization.resource.AffinityPolicy}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component-instance}],includeInResults=true}]}],policyIdentifiers=[{id=OSDF_CASABLANCA.Affinity_Default,version=1.0.0}],policySetIdentifiers=[{id=0d949b50-cf16-40f6-9c19-026e2fd2de1a,version=1.0}]}]} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.578+00:00|INFO|StdMatchableTranslator|qtp2014233765-31] Obligation: urn:org:onap:rest:body 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.578+00:00|INFO|StdMatchableTranslator|qtp2014233765-31] New entry onap.policies.optimization.resource.AffinityPolicy weight 0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.579+00:00|INFO|StdMatchableTranslator|qtp2014233765-31] Policy (OSDF_CASABLANCA.Affinity_Default,{type=onap.policies.optimization.resource.AffinityPolicy, type_version=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}, name=OSDF_CASABLANCA.Affinity_Default, version=1.0.0, metadata={policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0}}) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.580+00:00|INFO|RequestLog|qtp2014233765-31] 172.17.0.6 - policyadmin 
[16/Jun/2025:18:34:59 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 467 "" "python-requests/2.32.4" 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.968+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"source":"pap-df98c171-81af-48a2-b20e-6b7c42a0d39b","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"56415037-05c3-4c38-b9fb-020356e71e7c","timestampMs":1750098899940,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.968+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=56415037-05c3-4c38-b9fb-020356e71e7c, timestampMs=1750098899940, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-df98c171-81af-48a2-b20e-6b7c42a0d39b, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[], policiesToBeUndeployed=[onap.restart.tca 1.0.0]) 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.969+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:03 
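The PDP_UPDATE above (policiesToBeUndeployed=[onap.restart.tca 1.0.0]) and the PDP_STATUS it triggers both travel over the policy-pdp-pap Kafka topic flagged by the [IN|KAFKA|...] and [OUT|KAFKA|...] markers. A sketch of watching that exchange with kafka-python; the topic name and message fields come from the log, while the broker address is an assumption about the compose network:

    import json
    from kafka import KafkaConsumer  # pip install kafka-python

    # Broker address is assumed; only the topic name appears in the log.
    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers="localhost:29092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for record in consumer:
        msg = record.value
        # PDP_STATUS carries the PDP's current policy list; PDP_UPDATE carries
        # policiesToBeDeployed / policiesToBeUndeployed, as logged above.
        if msg.get("messageName") == "PDP_STATUS":
            print(msg["name"], [p["name"] for p in msg.get("policies", [])])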
policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.970+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Unloaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} from application monitoring 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.971+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"56415037-05c3-4c38-b9fb-020356e71e7c","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"3270cf9c-3884-4825-aa2b-8edb8611600f","timestampMs":1750098899970,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.976+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"56415037-05c3-4c38-b9fb-020356e71e7c","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"3270cf9c-3884-4825-aa2b-8edb8611600f","timestampMs":1750098899970,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:34:59.976+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:03 policy-xacml-pdp | [2025-06-16T18:35:22.765+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=846e2fcb-c890-4d0f-a2c8-5f3e4f1941ca, timestampMs=1750098922765, name=xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=ACTIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0, OSDF_CASABLANCA.Affinity_Default 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) 18:36:03 policy-xacml-pdp | [2025-06-16T18:35:22.765+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"846e2fcb-c890-4d0f-a2c8-5f3e4f1941ca","timestampMs":1750098922765,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:35:22.775+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:03 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"846e2fcb-c890-4d0f-a2c8-5f3e4f1941ca","timestampMs":1750098922765,"name":"xacml-d56e6f84-fbfd-46ed-a4eb-f82f742362e4","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:03 policy-xacml-pdp | [2025-06-16T18:35:22.776+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] 
discarding event of type PDP_STATUS 18:36:03 policy-xacml-pdp | [2025-06-16T18:35:35.580+00:00|INFO|RequestLog|qtp2014233765-32] 172.17.0.2 - policyadmin [16/Jun/2025:18:35:35 +0000] "GET /metrics HTTP/1.1" 200 2223 "" "Prometheus/3.4.1" 18:36:03 postgres | The files belonging to this database system will be owned by user "postgres". 18:36:03 postgres | This user must also own the server process. 18:36:03 postgres | 18:36:03 postgres | The database cluster will be initialized with locale "en_US.utf8". 18:36:03 postgres | The default database encoding has accordingly been set to "UTF8". 18:36:03 postgres | The default text search configuration will be set to "english". 18:36:03 postgres | 18:36:03 postgres | Data page checksums are disabled. 18:36:03 postgres | 18:36:03 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 18:36:03 postgres | creating subdirectories ... ok 18:36:03 postgres | selecting dynamic shared memory implementation ... posix 18:36:03 postgres | selecting default max_connections ... 100 18:36:03 postgres | selecting default shared_buffers ... 128MB 18:36:03 postgres | selecting default time zone ... Etc/UTC 18:36:03 postgres | creating configuration files ... ok 18:36:03 postgres | running bootstrap script ... ok 18:36:03 postgres | performing post-bootstrap initialization ... ok 18:36:03 postgres | syncing data to disk ... ok 18:36:03 postgres | 18:36:03 postgres | 18:36:03 postgres | Success. You can now start the database server using: 18:36:03 postgres | 18:36:03 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 18:36:03 postgres | 18:36:03 postgres | initdb: warning: enabling "trust" authentication for local connections 18:36:03 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 18:36:03 postgres | waiting for server to start....2025-06-16 18:32:39.305 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 18:36:03 postgres | 2025-06-16 18:32:39.307 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 18:36:03 postgres | 2025-06-16 18:32:39.311 UTC [52] LOG: database system was shut down at 2025-06-16 18:32:38 UTC 18:36:03 postgres | 2025-06-16 18:32:39.316 UTC [49] LOG: database system is ready to accept connections 18:36:03 postgres | done 18:36:03 postgres | server started 18:36:03 postgres | 18:36:03 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 18:36:03 postgres | 18:36:03 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 18:36:03 postgres | #!/bin/bash -xv 18:36:03 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved 18:36:03 postgres | # 18:36:03 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 18:36:03 postgres | # you may not use this file except in compliance with the License. 18:36:03 postgres | # You may obtain a copy of the License at 18:36:03 postgres | # 18:36:03 postgres | # http://www.apache.org/licenses/LICENSE-2.0 18:36:03 postgres | # 18:36:03 postgres | # Unless required by applicable law or agreed to in writing, software 18:36:03 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 18:36:03 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
18:36:03 postgres | # See the License for the specific language governing permissions and 18:36:03 postgres | # limitations under the License. 18:36:03 postgres | 18:36:03 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 18:36:03 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 18:36:03 postgres | CREATE ROLE 18:36:03 postgres | 18:36:03 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:03 postgres | do 18:36:03 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 18:36:03 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 18:36:03 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 18:36:03 postgres | done 18:36:03 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:03 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 18:36:03 postgres | CREATE DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 18:36:03 postgres | ALTER DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 18:36:03 postgres | GRANT 18:36:03 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:03 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 18:36:03 postgres | CREATE DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 18:36:03 postgres | ALTER DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 18:36:03 postgres | GRANT 18:36:03 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:03 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 18:36:03 postgres | CREATE DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 18:36:03 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 18:36:03 postgres | ALTER DATABASE 18:36:03 postgres | GRANT 18:36:03 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:03 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 18:36:03 postgres | CREATE DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 18:36:03 postgres | ALTER DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 18:36:03 postgres | GRANT 18:36:03 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:03 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 18:36:03 postgres | CREATE DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 18:36:03 postgres | ALTER DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 18:36:03 postgres | 
GRANT 18:36:03 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:03 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' 18:36:03 postgres | CREATE DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 18:36:03 postgres | ALTER DATABASE 18:36:03 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 18:36:03 postgres | GRANT 18:36:03 postgres | 18:36:03 postgres | waiting for server to shut down...2025-06-16 18:32:40.835 UTC [49] LOG: received fast shutdown request 18:36:03 postgres | .2025-06-16 18:32:40.838 UTC [49] LOG: aborting any active transactions 18:36:03 postgres | 2025-06-16 18:32:40.839 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1 18:36:03 postgres | 2025-06-16 18:32:40.840 UTC [50] LOG: shutting down 18:36:03 postgres | 2025-06-16 18:32:40.842 UTC [50] LOG: checkpoint starting: shutdown immediate 18:36:03 postgres | 2025-06-16 18:32:41.294 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.356 s, sync=0.085 s, total=0.454 s; sync files=1788, longest=0.007 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 18:36:03 postgres | 2025-06-16 18:32:41.310 UTC [49] LOG: database system is shut down 18:36:03 postgres | done 18:36:03 postgres | server stopped 18:36:03 postgres | 18:36:03 postgres | PostgreSQL init process complete; ready for start up. 18:36:03 postgres | 18:36:03 postgres | 2025-06-16 18:32:41.359 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 18:36:03 postgres | 2025-06-16 18:32:41.360 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 18:36:03 postgres | 2025-06-16 18:32:41.360 UTC [1] LOG: listening on IPv6 address "::", port 5432 18:36:03 postgres | 2025-06-16 18:32:41.363 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 18:36:03 postgres | 2025-06-16 18:32:41.373 UTC [102] LOG: database system was shut down at 2025-06-16 18:32:41 UTC 18:36:03 postgres | 2025-06-16 18:32:41.378 UTC [1] LOG: database system is ready to accept connections 18:36:03 prometheus | time=2025-06-16T18:32:41.372Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 18:36:03 prometheus | time=2025-06-16T18:32:41.372Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 18:36:03 prometheus | time=2025-06-16T18:32:41.372Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 18:36:03 prometheus | time=2025-06-16T18:32:41.376Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 18:36:03 prometheus | time=2025-06-16T18:32:41.381Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 18:36:03 prometheus | 
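The db-pg.sh trace above creates the policy_user role and six Policy Framework databases before the server restarts for normal duty. A quick sanity check of that initialization with psycopg2; the user and password (policy_user/policy_user) are visible in the expanded CREATE USER command, while the host and port are assumptions about where the postgres container is published:

    import psycopg2  # pip install psycopg2-binary

    conn = psycopg2.connect(host="localhost", port=5432,
                            user="policy_user", password="policy_user",
                            dbname="policyadmin")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT datname FROM pg_database WHERE datistemplate = false")
        names = {row[0] for row in cur.fetchall()}
    expected = {"migration", "pooling", "policyadmin", "policyclamp",
                "operationshistory", "clampacm"}
    # All six databases from the for-loop in db-pg.sh should be present.
    print(sorted(expected - names) or "all databases created")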
time=2025-06-16T18:32:41.382Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 18:36:03 prometheus | time=2025-06-16T18:32:41.384Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 18:36:03 prometheus | time=2025-06-16T18:32:41.384Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090 18:36:03 prometheus | time=2025-06-16T18:32:41.385Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 18:36:03 prometheus | time=2025-06-16T18:32:41.385Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.31µs 18:36:03 prometheus | time=2025-06-16T18:32:41.385Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 18:36:03 prometheus | time=2025-06-16T18:32:41.386Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=173.151µs 18:36:03 prometheus | time=2025-06-16T18:32:41.386Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=20.21µs wal_replay_duration=188.601µs wbl_replay_duration=170ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.31µs total_replay_duration=254.861µs 18:36:03 prometheus | time=2025-06-16T18:32:41.392Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 18:36:03 prometheus | time=2025-06-16T18:32:41.392Z level=INFO source=main.go:1290 msg="TSDB started" 18:36:03 prometheus | time=2025-06-16T18:32:41.392Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 18:36:03 prometheus | time=2025-06-16T18:32:41.393Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 18:36:03 prometheus | time=2025-06-16T18:32:41.393Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.47µs remote_storage=2.53µs web_handler=900ns query_engine=1.32µs scrape=194.483µs scrape_sd=146.041µs notify=190.991µs notify_sd=12.76µs rules=1.411µs tracing=19.03µs filename=/etc/prometheus/prometheus.yml totalDuration=1.22195ms 18:36:03 prometheus | time=2025-06-16T18:32:41.393Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 18:36:03 prometheus | time=2025-06-16T18:32:41.394Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" 18:36:04 zookeeper | ===> User 18:36:04 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 18:36:04 zookeeper | ===> Configuring ... 18:36:04 zookeeper | ===> Running preflight checks ... 18:36:04 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 18:36:04 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 18:36:04 zookeeper | ===> Launching ... 18:36:04 zookeeper | ===> Launching zookeeper ... 
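The 200 on "GET /metrics" earlier is Prometheus 3.4.1 scraping the PDP-X, and the Prometheus server above reports "Server is ready to receive web requests" once its TSDB and configuration load finish. A sketch of pulling the same text-format metrics directly; the host, port, and credentials are assumptions (the log shows only the 2223-byte response and the user policyadmin):

    import requests

    # Assumed address for the PDP-X metrics endpoint scraped in the log.
    resp = requests.get("http://localhost:6969/metrics",
                        auth=("policyadmin", "CHANGE_ME"))  # hypothetical creds
    resp.raise_for_status()
    # Prometheus text format: "# HELP"/"# TYPE" comments plus name/value lines.
    for line in resp.text.splitlines():
        if line and not line.startswith("#"):
            print(line)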
18:36:04 zookeeper | [2025-06-16 18:32:39,410] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 18:36:04 zookeeper | [2025-06-16 18:32:39,412] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 18:36:04 zookeeper | [2025-06-16 18:32:39,412] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 18:36:04 zookeeper | [2025-06-16 18:32:39,412] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 18:36:04 zookeeper | [2025-06-16 18:32:39,412] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 18:36:04 zookeeper | [2025-06-16 18:32:39,413] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 18:36:04 zookeeper | [2025-06-16 18:32:39,413] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 18:36:04 zookeeper | [2025-06-16 18:32:39,413] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 18:36:04 zookeeper | [2025-06-16 18:32:39,413] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 18:36:04 zookeeper | [2025-06-16 18:32:39,414] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 18:36:04 zookeeper | [2025-06-16 18:32:39,415] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 18:36:04 zookeeper | [2025-06-16 18:32:39,415] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 18:36:04 zookeeper | [2025-06-16 18:32:39,415] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 18:36:04 zookeeper | [2025-06-16 18:32:39,415] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 18:36:04 zookeeper | [2025-06-16 18:32:39,415] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 18:36:04 zookeeper | [2025-06-16 18:32:39,415] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 18:36:04 zookeeper | [2025-06-16 18:32:39,428] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) 18:36:04 zookeeper | [2025-06-16 18:32:39,430] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 18:36:04 zookeeper | [2025-06-16 18:32:39,430] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 18:36:04 zookeeper | [2025-06-16 18:32:39,432] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 18:36:04 zookeeper | [2025-06-16 18:32:39,444] INFO (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,444] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,445] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,445] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,445] INFO / 
/ / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,445] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,445] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,445] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,445] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,445] INFO (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka
/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/..
/share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,446] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 18:36:04 
18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,447] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,448] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
18:36:04 zookeeper | [2025-06-16 18:32:39,449] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,449] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,452] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
18:36:04 zookeeper | [2025-06-16 18:32:39,452] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
18:36:04 zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:04 zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:04 zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:04 zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:04 zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:04 zookeeper | [2025-06-16 18:32:39,453] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:04 zookeeper | [2025-06-16 18:32:39,455] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,455] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,456] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
18:36:04 zookeeper | [2025-06-16 18:32:39,456] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
18:36:04 zookeeper | [2025-06-16 18:32:39,456] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,477] INFO Logging initialized @385ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
18:36:04 zookeeper | [2025-06-16 18:32:39,533] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
18:36:04 zookeeper | [2025-06-16 18:32:39,533] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
18:36:04 zookeeper | [2025-06-16 18:32:39,547] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
18:36:04 zookeeper | [2025-06-16 18:32:39,576] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
18:36:04 zookeeper | [2025-06-16 18:32:39,576] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
18:36:04 zookeeper | [2025-06-16 18:32:39,577] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
18:36:04 zookeeper | [2025-06-16 18:32:39,580] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
18:36:04 zookeeper | [2025-06-16 18:32:39,588] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
18:36:04 zookeeper | [2025-06-16 18:32:39,597] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
18:36:04 zookeeper | [2025-06-16 18:32:39,597] INFO Started @509ms (org.eclipse.jetty.server.Server)
18:36:04 zookeeper | [2025-06-16 18:32:39,597] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,600] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
18:36:04 zookeeper | [2025-06-16 18:32:39,601] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
18:36:04 zookeeper | [2025-06-16 18:32:39,602] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
18:36:04 zookeeper | [2025-06-16 18:32:39,602] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
18:36:04 zookeeper | [2025-06-16 18:32:39,615] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
18:36:04 zookeeper | [2025-06-16 18:32:39,615] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
18:36:04 zookeeper | [2025-06-16 18:32:39,615] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
18:36:04 zookeeper | [2025-06-16 18:32:39,616] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
18:36:04 zookeeper | [2025-06-16 18:32:39,621] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
18:36:04 zookeeper | [2025-06-16 18:32:39,621] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
18:36:04 zookeeper | [2025-06-16 18:32:39,625] INFO Snapshot loaded in 10 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
18:36:04 zookeeper | [2025-06-16 18:32:39,626] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
18:36:04 zookeeper | [2025-06-16 18:32:39,627] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
18:36:04 zookeeper | [2025-06-16 18:32:39,633] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
18:36:04 zookeeper | [2025-06-16 18:32:39,634] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
18:36:04 zookeeper | [2025-06-16 18:32:39,646] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
18:36:04 zookeeper | [2025-06-16 18:32:39,647] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
18:36:04 zookeeper | [2025-06-16 18:32:40,589] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
18:36:04 Tearing down containers...
18:36:04 Container grafana Stopping
18:36:04 Container policy-xacml-pdp Stopping
18:36:04 Container policy-csit Stopping
18:36:04 Container policy-csit Stopped
18:36:04 Container policy-csit Removing
18:36:04 Container policy-csit Removed
18:36:04 Container grafana Stopped
18:36:04 Container grafana Removing
18:36:04 Container grafana Removed
18:36:04 Container prometheus Stopping
18:36:04 Container prometheus Stopped
18:36:04 Container prometheus Removing
18:36:04 Container prometheus Removed
18:36:14 Container policy-xacml-pdp Stopped
18:36:14 Container policy-xacml-pdp Removing
18:36:14 Container policy-xacml-pdp Removed
18:36:14 Container policy-pap Stopping
18:36:24 Container policy-pap Stopped
18:36:24 Container policy-pap Removing
18:36:24 Container policy-pap Removed
18:36:24 Container policy-api Stopping
18:36:24 Container kafka Stopping
18:36:25 Container kafka Stopped
18:36:25 Container kafka Removing
18:36:25 Container kafka Removed
18:36:25 Container zookeeper Stopping
18:36:26 Container zookeeper Stopped
18:36:26 Container zookeeper Removing
18:36:26 Container zookeeper Removed
18:36:35 Container policy-api Stopped
18:36:35 Container policy-api Removing
18:36:35 Container policy-api Removed
18:36:35 Container policy-db-migrator Stopping
18:36:35 Container policy-db-migrator Stopped
18:36:35 Container policy-db-migrator Removing
18:36:35 Container policy-db-migrator Removed
18:36:35 Container postgres Stopping
18:36:35 Container postgres Stopped
18:36:35 Container postgres Removing
18:36:35 Container postgres Removed
18:36:35 Network compose_default Removing
18:36:35 Network compose_default Removed
18:36:35 $ ssh-agent -k
18:36:35 unset SSH_AUTH_SOCK;
18:36:35 unset SSH_AGENT_PID;
18:36:35 echo Agent pid 2073 killed;
18:36:35 [ssh-agent] Stopped.
18:36:35 Robot results publisher started...
18:36:35 INFO: Checking test criticality is deprecated and will be dropped in a future release!
18:36:35 -Parsing output xml:
18:36:36 Done!
18:36:36 -Copying log files to build dir:
18:36:36 Done!
18:36:36 -Assigning results to build:
18:36:36 Done!
18:36:36 -Checking thresholds:
18:36:36 Done!
18:36:36 Done publishing Robot results.
18:36:36 [PostBuildScript] - [INFO] Executing post build scripts.
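
The teardown recorded above has the shape of a single Docker Compose teardown: each service passes through Stopping/Stopped/Removing/Removed in dependency order, and the compose_default network is removed last. A minimal sketch of the kind of command that produces such output; the "compose" directory name is consistent with the compose_default network, but the exact path is an assumption, not something this log records:

    # From the directory holding the CSIT compose file (path assumed):
    cd /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/compose
    # Stops and removes every service container, then removes the project network,
    # which matches the Stopping/Stopped/Removing/Removed sequence above.
    docker compose down
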
18:36:36 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins633225534144676375.sh
18:36:36 ---> sysstat.sh
18:36:36 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins3491583075420671688.sh
18:36:36 ---> package-listing.sh
18:36:36 ++ facter osfamily
18:36:36 ++ tr '[:upper:]' '[:lower:]'
18:36:37 + OS_FAMILY=debian
18:36:37 + workspace=/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp
18:36:37 + START_PACKAGES=/tmp/packages_start.txt
18:36:37 + END_PACKAGES=/tmp/packages_end.txt
18:36:37 + DIFF_PACKAGES=/tmp/packages_diff.txt
18:36:37 + PACKAGES=/tmp/packages_start.txt
18:36:37 + '[' /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp ']'
18:36:37 + PACKAGES=/tmp/packages_end.txt
18:36:37 + case "${OS_FAMILY}" in
18:36:37 + dpkg -l
18:36:37 + grep '^ii'
18:36:37 + '[' -f /tmp/packages_start.txt ']'
18:36:37 + '[' -f /tmp/packages_end.txt ']'
18:36:37 + diff /tmp/packages_start.txt /tmp/packages_end.txt
18:36:37 + '[' /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp ']'
18:36:37 + mkdir -p /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/archives/
18:36:37 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/archives/
18:36:37 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins9649094732095286624.sh
18:36:37 ---> capture-instance-metadata.sh
18:36:37 Setup pyenv:
18:36:37   system
18:36:37   3.8.13
18:36:37   3.9.13
18:36:37 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
18:36:37 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ytGL from file:/tmp/.os_lf_venv
18:36:39 lf-activate-venv(): INFO: Installing: lftools
18:36:47 lf-activate-venv(): INFO: Adding /tmp/venv-ytGL/bin to PATH
18:36:47 INFO: Running in OpenStack, capturing instance metadata
18:36:48 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins1853916147953942266.sh
18:36:48 provisioning config files...
18:36:48 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/config14512127637261109072tmp
18:36:48 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
18:36:48 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
18:36:48 [EnvInject] - Injecting environment variables from a build step.
18:36:48 [EnvInject] - Injecting as environment variables the properties content
18:36:48 SERVER_ID=logs
18:36:48
18:36:48 [EnvInject] - Variables injected successfully.
18:36:48 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins3496005532926683194.sh
18:36:48 ---> create-netrc.sh
18:36:48 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins17629765988766509521.sh
18:36:48 ---> python-tools-install.sh
18:36:48 Setup pyenv:
18:36:48   system
18:36:48   3.8.13
18:36:48   3.9.13
18:36:48 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
18:36:48 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ytGL from file:/tmp/.os_lf_venv
18:36:50 lf-activate-venv(): INFO: Installing: lftools
18:36:58 lf-activate-venv(): INFO: Adding /tmp/venv-ytGL/bin to PATH
18:36:58 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins15043706426140578330.sh
18:36:58 ---> sudo-logs.sh
18:36:58 Archiving 'sudo' log..
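
The xtrace lines from package-listing.sh above expose its whole mechanism: snapshot the installed packages with dpkg, diff the end-of-build snapshot against the start-of-build one, and archive all three lists with the build. A rough reconstruction of that logic; the variable names come from the trace, while the workspace plumbing and case branches are assumptions, not the canonical LF script:

    #!/bin/bash
    # Snapshot-and-diff of installed packages, reconstructed from the xtrace above.
    OS_FAMILY=$(facter osfamily | tr '[:upper:]' '[:lower:]')   # "debian" on this builder
    workspace=$WORKSPACE                                        # set by Jenkins; assumed source of the path in the trace
    START_PACKAGES=/tmp/packages_start.txt
    END_PACKAGES=/tmp/packages_end.txt
    DIFF_PACKAGES=/tmp/packages_diff.txt
    # With a workspace set we are at end-of-build, so write the "end" snapshot
    PACKAGES=$START_PACKAGES
    [ "$workspace" ] && PACKAGES=$END_PACKAGES
    case "${OS_FAMILY}" in
      debian|ubuntu) dpkg -l | grep '^ii' > "$PACKAGES" ;;
    esac
    # When both snapshots exist, record what the job installed or removed
    if [ -f "$START_PACKAGES" ] && [ -f "$END_PACKAGES" ]; then
      diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES" || true   # diff exits 1 when files differ
    fi
    # Archive all three lists alongside the build artifacts
    if [ "$workspace" ]; then
      mkdir -p "${workspace}/archives/"
      cp -f "$DIFF_PACKAGES" "$END_PACKAGES" "$START_PACKAGES" "${workspace}/archives/"
    fi
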
18:36:58 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins6598883033032786703.sh
18:36:58 ---> job-cost.sh
18:36:58 Setup pyenv:
18:36:58   system
18:36:58   3.8.13
18:36:58   3.9.13
18:36:58 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
18:36:58 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ytGL from file:/tmp/.os_lf_venv
18:37:00 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
18:37:05 lf-activate-venv(): INFO: Adding /tmp/venv-ytGL/bin to PATH
18:37:05 INFO: No Stack...
18:37:05 INFO: Retrieving Pricing Info for: v3-standard-8
18:37:06 INFO: Archiving Costs
18:37:06 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash -l /tmp/jenkins3295918368526128711.sh
18:37:06 ---> logs-deploy.sh
18:37:06 Setup pyenv:
18:37:06   system
18:37:06   3.8.13
18:37:06   3.9.13
18:37:06 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
18:37:06 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ytGL from file:/tmp/.os_lf_venv
18:37:08 lf-activate-venv(): INFO: Installing: lftools
18:37:16 lf-activate-venv(): INFO: Adding /tmp/venv-ytGL/bin to PATH
18:37:16 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-xacml-pdp/2012
18:37:16 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
18:37:17 Archives upload complete.
18:37:17 INFO: archiving logs to Nexus
18:37:18 ---> uname -a:
18:37:18 Linux prd-ubuntu1804-docker-8c-8g-21665 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
18:37:18
18:37:18
18:37:18 ---> lscpu:
18:37:18 Architecture:        x86_64
18:37:18 CPU op-mode(s):      32-bit, 64-bit
18:37:18 Byte Order:          Little Endian
18:37:18 CPU(s):              8
18:37:18 On-line CPU(s) list: 0-7
18:37:18 Thread(s) per core:  1
18:37:18 Core(s) per socket:  1
18:37:18 Socket(s):           8
18:37:18 NUMA node(s):        1
18:37:18 Vendor ID:           AuthenticAMD
18:37:18 CPU family:          23
18:37:18 Model:               49
18:37:18 Model name:          AMD EPYC-Rome Processor
18:37:18 Stepping:            0
18:37:18 CPU MHz:             2799.998
18:37:18 BogoMIPS:            5599.99
18:37:18 Virtualization:      AMD-V
18:37:18 Hypervisor vendor:   KVM
18:37:18 Virtualization type: full
18:37:18 L1d cache:           32K
18:37:18 L1i cache:           32K
18:37:18 L2 cache:            512K
18:37:18 L3 cache:            16384K
18:37:18 NUMA node0 CPU(s):   0-7
18:37:18 Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
18:37:18
18:37:18
18:37:18 ---> nproc:
18:37:18 8
18:37:18
18:37:18
18:37:18 ---> df -h:
18:37:18 Filesystem      Size  Used Avail Use% Mounted on
18:37:18 udev             16G     0   16G   0% /dev
18:37:18 tmpfs           3.2G  708K  3.2G   1% /run
18:37:18 /dev/vda1       155G   15G  141G  10% /
18:37:18 tmpfs            16G     0   16G   0% /dev/shm
18:37:18 tmpfs           5.0M     0  5.0M   0% /run/lock
18:37:18 tmpfs            16G     0   16G   0% /sys/fs/cgroup
18:37:18 /dev/vda15      105M  4.4M  100M   5% /boot/efi
18:37:18 tmpfs           3.2G     0  3.2G   0% /run/user/1001
18:37:18
18:37:18
18:37:18 ---> free -m:
18:37:18               total        used        free      shared  buff/cache   available
18:37:18 Mem:          32167         890       24274           0        7002       30821
18:37:18 Swap:          1023           0        1023
18:37:18
18:37:18
18:37:18 ---> ip addr:
18:37:18 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
18:37:18     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
18:37:18     inet 127.0.0.1/8 scope host lo
18:37:18        valid_lft forever preferred_lft forever
18:37:18     inet6 ::1/128 scope host
18:37:18        valid_lft forever preferred_lft forever
18:37:18 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc mq state UP group default qlen 1000
18:37:18     link/ether fa:16:3e:b3:5e:05 brd ff:ff:ff:ff:ff:ff
18:37:18     inet 10.30.106.152/23 brd 10.30.107.255 scope global dynamic ens3
18:37:18        valid_lft 85970sec preferred_lft 85970sec
18:37:18     inet6 fe80::f816:3eff:feb3:5e05/64 scope link
18:37:18        valid_lft forever preferred_lft forever
18:37:18 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
18:37:18     link/ether 02:42:ad:d4:77:75 brd ff:ff:ff:ff:ff:ff
18:37:18     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
18:37:18        valid_lft forever preferred_lft forever
18:37:18     inet6 fe80::42:adff:fed4:7775/64 scope link
18:37:18        valid_lft forever preferred_lft forever
18:37:18
18:37:18
18:37:18 ---> sar -b -r -n DEV:
18:37:18 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21665) 06/16/25 _x86_64_ (8 CPU)
18:37:18
18:37:18 18:30:09 LINUX RESTART (8 CPU)
18:37:18
18:37:18 18:31:02 tps rtps wtps bread/s bwrtn/s
18:37:18 18:32:02 223.85 23.23 200.62 2343.48 54154.28
18:37:18 18:33:01 604.27 7.74 596.53 470.70 179278.63
18:37:18 18:34:01 148.76 0.12 148.64 13.46 41999.27
18:37:18 18:35:01 116.03 0.32 115.71 31.06 40710.55
18:37:18 18:36:01 22.56 0.00 22.56 0.00 23169.47
18:37:18 18:37:01 82.00 1.32 80.69 98.25 24964.91
18:37:18 Average: 198.41 5.42 192.99 490.00 60392.37
18:37:18
18:37:18 18:31:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
18:37:18 18:32:02 26015116 31556400 6924104 21.02 106764 5632724 2261728 6.65 1057272 5410688 3243580
18:37:18 18:33:01 24275944 30600632 8663276 26.30 158088 6272436 6842756 20.13 2217104 5822884 240
18:37:18 18:34:01 22851284 29737400 10087936 30.63 178536 6772156 8203264 24.14 3184952 6215072 20316
18:37:18 18:35:01 22571420 29528708 10367800 31.48 200216 6809328 8705820 25.61 3448284 6219148 532
18:37:18 18:36:01 22620620 29577812 10318600 31.33 200380 6810096 8424480 24.79 3408464 6213188 124
18:37:18 18:37:01 24898460 31598344 8040760 24.41 202144 6545620 1575812 4.64 1448636 5970516 11912
18:37:18 Average: 23872141 30433216 9067079 27.53 174355 6473727 6002310 17.66 2460785 5975249 546117
18:37:18
18:37:18 18:31:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
18:37:18 18:32:02 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:37:18 18:32:02 ens3 1232.63 752.72 33236.76 63.26 0.00 0.00 0.00 0.00
18:37:18 18:32:02 lo 13.93 13.93 1.31 1.31 0.00 0.00 0.00 0.00
18:37:18 18:33:01 veth01f20ea 0.00 0.19 0.00 0.01 0.00 0.00 0.00 0.00
18:37:18 18:33:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:37:18 18:33:01 veth0cd73b5 0.37 0.54 0.03 0.03 0.00 0.00 0.00 0.00
18:37:18 18:33:01 vethf4c97aa 2.59 2.37 0.31 0.30 0.00 0.00 0.00 0.00
18:37:18 18:34:01 veth01f20ea 0.45 0.50 0.05 1.00 0.00 0.00 0.00 0.00
18:37:18 18:34:01 docker0 100.22 135.56 5.35 1053.19 0.00 0.00 0.00 0.00
18:37:18 18:34:01 veth0cd73b5 4.05 5.08 0.65 0.53 0.00 0.00 0.00 0.00
18:37:18 18:34:01 vethf4c97aa 89.32 89.34 15.73 18.33 0.00 0.00 0.00 0.00
18:37:18 18:35:01 veth01f20ea 0.50 0.62 0.05 1.26 0.00 0.00 0.00 0.00
18:37:18 18:35:01 docker0 42.81 62.71 3.69 296.13 0.00 0.00 0.00 0.00
18:37:18 18:35:01 vethe424d93 1.87 1.68 0.67 0.49 0.00 0.00 0.00 0.00
18:37:18 18:35:01 veth0cd73b5 4.67 6.15 0.96 0.72 0.00 0.00 0.00 0.00
18:37:18 18:36:01 veth01f20ea 0.80 0.93 0.09 1.32 0.00 0.00 0.00 0.00
18:37:18 18:36:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:37:18 18:36:01 veth0cd73b5 3.58 5.03 0.57 0.39 0.00 0.00 0.00 0.00
18:37:18 18:36:01 vethf4c97aa 222.33 221.78 31.57 46.48 0.00 0.00 0.00 0.00
18:37:18 18:37:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:37:18 18:37:01 ens3 1973.32 1267.64 36510.51 187.94 0.00 0.00 0.00 0.00
18:37:18 18:37:01 lo 26.70 26.70 2.40 2.40 0.00 0.00 0.00 0.00
18:37:18 Average: docker0 23.94 33.19 1.51 225.86 0.00 0.00 0.00 0.00
18:37:18 Average: ens3 268.13 171.50 5945.08 19.55 0.00 0.00 0.00 0.00
18:37:18 Average: lo 3.78 3.78 0.34 0.34 0.00 0.00 0.00 0.00
18:37:18
18:37:18
18:37:18 ---> sar -P ALL:
18:37:18 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21665) 06/16/25 _x86_64_ (8 CPU)
18:37:18
18:37:18 18:30:09 LINUX RESTART (8 CPU)
18:37:18
18:37:18 18:31:02 CPU %user %nice %system %iowait %steal %idle
18:37:18 18:32:02 all 15.13 0.00 3.96 3.43 0.05 77.43
18:37:18 18:32:02 0 8.48 0.00 4.03 0.95 0.05 86.50
18:37:18 18:32:02 1 38.81 0.00 4.65 6.38 0.08 50.08
18:37:18 18:32:02 2 11.53 0.00 3.91 4.91 0.05 79.60
18:37:18 18:32:02 3 15.24 0.00 3.94 0.80 0.07 79.95
18:37:18 18:32:02 4 11.09 0.00 3.39 8.13 0.05 77.35
18:37:18 18:32:02 5 8.02 0.00 3.77 2.55 0.03 85.62
18:37:18 18:32:02 6 8.39 0.00 3.25 0.61 0.03 87.72
18:37:18 18:32:02 7 19.42 0.00 4.80 3.14 0.07 72.57
18:37:18 18:33:01 all 17.42 0.00 4.92 11.30 0.06 66.30
18:37:18 18:33:01 0 22.02 0.00 5.20 3.15 0.07 69.56
18:37:18 18:33:01 1 17.20 0.00 4.90 4.24 0.05 73.61
18:37:18 18:33:01 2 17.78 0.00 4.61 3.28 0.05 74.28
18:37:18 18:33:01 3 16.88 0.00 4.58 6.08 0.05 72.41
18:37:18 18:33:01 4 14.02 0.00 3.88 15.12 0.05 66.94
18:37:18 18:33:01 5 17.88 0.00 5.32 7.86 0.07 68.87
18:37:18 18:33:01 6 15.91 0.00 6.14 42.29 0.07 35.58
18:37:18 18:33:01 7 17.66 0.00 4.66 8.56 0.05 69.07
18:37:18 18:34:01 all 19.21 0.00 2.24 1.88 0.07 76.60
18:37:18 18:34:01 0 14.58 0.00 2.08 2.48 0.05 80.82
18:37:18 18:34:01 1 15.48 0.00 2.35 0.18 0.07 81.92
18:37:18 18:34:01 2 22.55 0.00 2.37 0.22 0.07 74.80
18:37:18 18:34:01 3 27.56 0.00 2.28 2.77 0.08 67.31
18:37:18 18:34:01 4 17.03 0.00 1.73 7.37 0.08 73.78
18:37:18 18:34:01 5 19.19 0.00 2.17 0.03 0.05 78.55
18:37:18 18:34:01 6 15.80 0.00 1.82 0.94 0.07 81.37
18:37:18 18:34:01 7 21.51 0.00 3.11 1.05 0.08 74.23
18:37:18 18:35:01 all 9.68 0.00 1.84 2.34 0.06 86.09
18:37:18 18:35:01 0 8.57 0.00 1.34 0.32 0.05 89.72
18:37:18 18:35:01 1 11.55 0.00 2.12 0.12 0.07 86.14
18:37:18 18:35:01 2 14.59 0.00 2.09 1.04 0.07 82.21
18:37:18 18:35:01 3 10.03 0.00 1.97 5.08 0.05 82.87
18:37:18 18:35:01 4 7.75 0.00 2.07 3.24 0.07 86.88
18:37:18 18:35:01 5 8.47 0.00 2.28 0.10 0.05 89.11
18:37:18 18:35:01 6 8.30 0.00 1.41 7.69 0.07 82.53
18:37:18 18:35:01 7 8.17 0.00 1.49 1.14 0.05 89.16
18:37:18 18:36:01 all 0.93 0.00 0.23 0.94 0.04 97.86
18:37:18 18:36:01 0 0.75 0.00 0.15 0.02 0.03 99.05
18:37:18 18:36:01 1 0.73 0.00 0.37 0.02 0.03 98.85
18:37:18 18:36:01 2 0.82 0.00 0.17 0.02 0.05 98.95
18:37:18 18:36:01 3 1.27 0.00 0.18 0.02 0.05 98.48
18:37:18 18:36:01 4 1.12 0.00 0.35 0.13 0.02 98.37
18:37:18 18:36:01 5 0.98 0.00 0.15 0.02 0.03 98.81
18:37:18 18:36:01 6 1.07 0.00 0.30 7.28 0.05 91.30
18:37:18 18:36:01 7 0.67 0.00 0.17 0.00 0.03 99.13
18:37:18 18:37:01 all 5.86 0.00 0.81 1.21 0.03 92.09
18:37:18 18:37:01 0 1.18 0.00 0.95 0.08 0.03 97.75
18:37:18 18:37:01 1 13.99 0.00 0.82 0.17 0.03 85.00
18:37:18 18:37:01 2 1.99 0.00 0.55 0.99 0.02 96.46
18:37:18 18:37:01 3 6.89 0.00 0.99 0.08 0.05 91.99
18:37:18 18:37:01 4 1.58 0.00 0.58 0.60 0.02 97.22
18:37:18 18:37:01 5 3.86 0.00 0.87 0.18 0.03 95.06
18:37:18 18:37:01 6 14.83 0.00 1.13 0.43 0.05 83.55
18:37:18 18:37:01 7 2.64 0.00 0.56 7.10 0.03 89.66
18:37:18 Average: all 11.34 0.00 2.32 3.49 0.05 82.81
18:37:18 Average: 0 9.22 0.00 2.28 1.16 0.05 87.30
18:37:18 Average: 1 16.24 0.00 2.52 1.83 0.06 79.35
18:37:18 Average: 2 11.52 0.00 2.27 1.73 0.05 84.43
18:37:18 Average: 3 12.95 0.00 2.31 2.46 0.06 82.21
18:37:18 Average: 4 8.73 0.00 1.99 5.72 0.05 83.52
18:37:18 Average: 5 9.70 0.00 2.41 1.76 0.04 86.08
18:37:18 Average: 6 10.70 0.00 2.32 9.75 0.06 77.17
18:37:18 Average: 7 11.62 0.00 2.45 3.49 0.05 82.39
18:37:18
18:37:18
18:37:18
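
Both reports above come from sysstat's sar, and the flags in the section headers ("sar -b -r -n DEV", "sar -P ALL") are the full recipe; each report prints one block per sampling interval plus an Average section. A minimal sketch for reproducing them on a similar Ubuntu host; the activity-file path is the Debian/Ubuntu default and an assumption here, since this log does not show it:

    # I/O transfer rates (-b), memory utilization (-r) and per-interface
    # network statistics (-n DEV), read from the day's activity file:
    sar -b -r -n DEV -f /var/log/sysstat/sa"$(date +%d)"
    # Per-CPU utilization for CPUs 0-7 plus the "all" aggregate:
    sar -P ALL -f /var/log/sysstat/sa"$(date +%d)"
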