18:30:45 Started by timer
18:30:45 Running as SYSTEM
18:30:45 [EnvInject] - Loading node environment variables.
18:30:45 Building remotely on prd-ubuntu1804-docker-8c-8g-21442 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp
18:30:45 [ssh-agent] Looking for ssh-agent implementation...
18:30:45 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
18:30:45 $ ssh-agent
18:30:45 SSH_AUTH_SOCK=/tmp/ssh-xvJXDFD4kYHU/agent.2087
18:30:45 SSH_AGENT_PID=2089
18:30:45 [ssh-agent] Started.
18:30:45 Running ssh-add (command line suppressed)
18:30:45 Identity added: /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/private_key_18258624310801378670.key (/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/private_key_18258624310801378670.key)
18:30:45 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
18:30:45 The recommended git tool is: NONE
18:30:46 using credential onap-jenkins-ssh
18:30:46 Wiping out workspace first.
18:30:46 Cloning the remote Git repository
18:30:46 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
18:30:46  > git init /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp # timeout=10
18:30:46 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
18:30:46  > git --version # timeout=10
18:30:46  > git --version # 'git version 2.17.1'
18:30:46 using GIT_SSH to set credentials Gerrit user
18:30:46 Verifying host key using manually-configured host key entries
18:30:46  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
18:30:47  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
18:30:47  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
18:30:47 Avoid second fetch
18:30:47  > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
18:30:47 Checking out Revision 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c (refs/remotes/origin/master)
18:30:47  > git config core.sparsecheckout # timeout=10
18:30:47  > git checkout -f 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=30
18:30:48 Commit message: "Remove VFC from docker compose and helm configurations"
18:30:48  > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
18:30:51 provisioning config files...
18:30:51 copy managed file [npmrc] to file:/home/jenkins/.npmrc
18:30:51 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
18:30:51 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins4377134326764215690.sh
18:30:51 ---> python-tools-install.sh
18:30:51 Setup pyenv:
18:30:51 * system (set by /opt/pyenv/version)
18:30:51 * 3.8.13 (set by /opt/pyenv/version)
18:30:51 * 3.9.13 (set by /opt/pyenv/version)
18:30:51 * 3.10.6 (set by /opt/pyenv/version)
18:30:55 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-pIkU
18:30:55 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
18:31:00 lf-activate-venv(): INFO: Installing: lftools
18:31:23 lf-activate-venv(): INFO: Adding /tmp/venv-pIkU/bin to PATH
18:31:23 Generating Requirements File
18:31:42 Python 3.10.6
18:31:43 pip 25.1.1 from /tmp/venv-pIkU/lib/python3.10/site-packages/pip (python 3.10)
18:31:43 appdirs==1.4.4
18:31:43 argcomplete==3.6.2
18:31:43 aspy.yaml==1.3.0
18:31:43 attrs==25.3.0
18:31:43 autopage==0.5.2
18:31:43 beautifulsoup4==4.13.4
18:31:43 boto3==1.38.36
18:31:43 botocore==1.38.36
18:31:43 bs4==0.0.2
18:31:43 cachetools==5.5.2
18:31:43 certifi==2025.6.15
18:31:43 cffi==1.17.1
18:31:43 cfgv==3.4.0
18:31:43 chardet==5.2.0
18:31:43 charset-normalizer==3.4.2
18:31:43 click==8.2.1
18:31:43 cliff==4.10.0
18:31:43 cmd2==2.6.1
18:31:43 cryptography==3.3.2
18:31:43 debtcollector==3.0.0
18:31:43 decorator==5.2.1
18:31:43 defusedxml==0.7.1
18:31:43 Deprecated==1.2.18
18:31:43 distlib==0.3.9
18:31:43 dnspython==2.7.0
18:31:43 docker==7.1.0
18:31:43 dogpile.cache==1.4.0
18:31:43 durationpy==0.10
18:31:43 email_validator==2.2.0
18:31:43 filelock==3.18.0
18:31:43 future==1.0.0
18:31:43 gitdb==4.0.12
18:31:43 GitPython==3.1.44
18:31:43 google-auth==2.40.3
18:31:43 httplib2==0.22.0
18:31:43 identify==2.6.12
18:31:43 idna==3.10
18:31:43 importlib-resources==1.5.0
18:31:43 iso8601==2.1.0
18:31:43 Jinja2==3.1.6
18:31:43 jmespath==1.0.1
18:31:43 jsonpatch==1.33
18:31:43 jsonpointer==3.0.0
18:31:43 jsonschema==4.24.0
18:31:43 jsonschema-specifications==2025.4.1
18:31:43 keystoneauth1==5.11.1
18:31:43 kubernetes==33.1.0
18:31:43 lftools==0.37.13
18:31:43 lxml==5.4.0
18:31:43 MarkupSafe==3.0.2
18:31:43 msgpack==1.1.1
18:31:43 multi_key_dict==2.0.3
18:31:43 munch==4.0.0
18:31:43 netaddr==1.3.0
18:31:43 niet==1.4.2
18:31:43 nodeenv==1.9.1
18:31:43 oauth2client==4.1.3
18:31:43 oauthlib==3.2.2
18:31:43 openstacksdk==4.6.0
18:31:43 os-client-config==2.1.0
18:31:43 os-service-types==1.7.0
18:31:43 osc-lib==4.0.2
18:31:43 oslo.config==9.8.0
18:31:43 oslo.context==6.0.0
18:31:43 oslo.i18n==6.5.1
18:31:43 oslo.log==7.1.0
18:31:43 oslo.serialization==5.7.0
18:31:43 oslo.utils==9.0.0
18:31:43 packaging==25.0
18:31:43 pbr==6.1.1
18:31:43 platformdirs==4.3.8
18:31:43 prettytable==3.16.0
18:31:43 psutil==7.0.0
18:31:43 pyasn1==0.6.1
18:31:43 pyasn1_modules==0.4.2
18:31:43 pycparser==2.22
18:31:43 pygerrit2==2.0.15
18:31:43 PyGithub==2.6.1
18:31:43 PyJWT==2.10.1
18:31:43 PyNaCl==1.5.0
18:31:43 pyparsing==2.4.7
18:31:43 pyperclip==1.9.0
18:31:43 pyrsistent==0.20.0
18:31:43 python-cinderclient==9.7.0
18:31:43 python-dateutil==2.9.0.post0
18:31:43 python-heatclient==4.2.0
18:31:43 python-jenkins==1.8.2
18:31:43 python-keystoneclient==5.6.0
18:31:43 python-magnumclient==4.8.1
18:31:43 python-openstackclient==8.1.0
18:31:43 python-swiftclient==4.8.0
18:31:43 PyYAML==6.0.2
18:31:43 referencing==0.36.2
18:31:43 requests==2.32.4
18:31:43 requests-oauthlib==2.0.0
18:31:43 requestsexceptions==1.4.0
18:31:43 rfc3986==2.0.0
18:31:43 rpds-py==0.25.1
18:31:43 rsa==4.9.1
18:31:43 ruamel.yaml==0.18.14
18:31:43 ruamel.yaml.clib==0.2.12
18:31:43 s3transfer==0.13.0
18:31:43 simplejson==3.20.1
18:31:43 six==1.17.0
18:31:43 smmap==5.0.2
18:31:43 soupsieve==2.7
18:31:43 stevedore==5.4.1
18:31:43 tabulate==0.9.0
18:31:43 toml==0.10.2
18:31:43 tomlkit==0.13.3
18:31:43 tqdm==4.67.1
18:31:43 typing_extensions==4.14.0
18:31:43 tzdata==2025.2
18:31:43 urllib3==1.26.20
18:31:43 virtualenv==20.31.2
18:31:43 wcwidth==0.2.13
18:31:43 websocket-client==1.8.0
18:31:43 wrapt==1.17.2
18:31:43 xdg==6.0.0
18:31:43 xmltodict==0.14.2
18:31:43 yq==3.4.3
18:31:43 [EnvInject] - Injecting environment variables from a build step.
18:31:43 [EnvInject] - Injecting as environment variables the properties content
18:31:43 SET_JDK_VERSION=openjdk17
18:31:43 GIT_URL="git://cloud.onap.org/mirror"
18:31:43 
18:31:43 [EnvInject] - Variables injected successfully.
18:31:43 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/sh /tmp/jenkins14955638234766766722.sh
18:31:43 ---> update-java-alternatives.sh
18:31:43 ---> Updating Java version
18:31:43 ---> Ubuntu/Debian system detected
18:31:43 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
18:31:43 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
18:31:43 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
18:31:44 openjdk version "17.0.4" 2022-07-19
18:31:44 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
18:31:44 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
18:31:44 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
18:31:44 [EnvInject] - Injecting environment variables from a build step.
18:31:44 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
18:31:44 [EnvInject] - Variables injected successfully.
18:31:44 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/sh -xe /tmp/jenkins3925107119235682358.sh
18:31:44 + /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/csit/run-project-csit.sh xacml-pdp
18:31:44 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
18:31:44 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
18:31:44 Configure a credential helper to remove this warning. See
18:31:44 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
18:31:44 
18:31:44 Login Succeeded
18:31:44 docker: 'compose' is not a docker command.
18:31:44 See 'docker --help'
18:31:44 Docker Compose Plugin not installed. Installing now...
18:31:44   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
18:31:44                                  Dload  Upload   Total   Spent    Left  Speed
18:31:45 100 60.2M  100 60.2M    0     0  59.5M      0  0:00:01  0:00:01 --:--:-- 46.7M
18:31:45 Setting project configuration for: xacml-pdp
18:31:45 Configuring docker compose...
18:31:47 Starting xacml-pdp using postgres + Grafana/Prometheus
18:31:47 grafana Pulling
18:31:47 prometheus Pulling
18:31:47 xacml-pdp Pulling
18:31:47 postgres Pulling
18:31:47 kafka Pulling
18:31:47 api Pulling
18:31:47 pap Pulling
18:31:47 zookeeper Pulling
18:31:47 policy-db-migrator Pulling
[per-layer "Pulling fs layer" / Downloading / Verifying Checksum / Extracting / Pull complete progress elided]
18:31:52 pap Pulled
18:31:52 api Pulled
18:31:52 xacml-pdp Pulled
[remaining layer download/extract progress elided]
56aca8a42329 Downloading [> ] 539.6kB/71.91MB 18:31:54 eabd8714fec9 Extracting [===========> ] 85.79MB/375MB 18:31:54 1e017ebebdbd Extracting [==========================================> ] 31.85MB/37.19MB 18:31:54 c49e0ee60bfb Extracting [=====> ] 12.26MB/107.3MB 18:31:54 55f2b468da67 Downloading [===============================> ] 160MB/257.9MB 18:31:54 09d5a3f70313 Downloading [=======================> ] 51.9MB/109.2MB 18:31:54 e73cb4a42719 Extracting [=================> ] 38.44MB/109.1MB 18:31:54 eabd8714fec9 Extracting [============> ] 91.91MB/375MB 18:31:54 56aca8a42329 Downloading [====> ] 5.946MB/71.91MB 18:31:54 55f2b468da67 Downloading [=================================> ] 172.5MB/257.9MB 18:31:54 1e017ebebdbd Extracting [=============================================> ] 34.21MB/37.19MB 18:31:54 c49e0ee60bfb Extracting [=======> ] 15.04MB/107.3MB 18:31:54 09d5a3f70313 Downloading [=============================> ] 65.42MB/109.2MB 18:31:54 44986281b8b9 Pull complete 18:31:54 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 18:31:54 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 18:31:54 e73cb4a42719 Extracting [===================> ] 41.78MB/109.1MB 18:31:54 56aca8a42329 Downloading [=========> ] 13.52MB/71.91MB 18:31:54 eabd8714fec9 Extracting [=============> ] 99.16MB/375MB 18:31:54 55f2b468da67 Downloading [====================================> ] 186MB/257.9MB 18:31:54 1e017ebebdbd Extracting [================================================> ] 36.18MB/37.19MB 18:31:54 09d5a3f70313 Downloading [====================================> ] 80.02MB/109.2MB 18:31:54 c49e0ee60bfb Extracting [=======> ] 16.71MB/107.3MB 18:31:54 1e017ebebdbd Extracting [==================================================>] 37.19MB/37.19MB 18:31:54 e73cb4a42719 Extracting [====================> ] 44.56MB/109.1MB 18:31:54 56aca8a42329 Downloading [===============> ] 21.63MB/71.91MB 18:31:54 55f2b468da67 Downloading [======================================> ] 197.9MB/257.9MB 18:31:54 bf70c5107ab5 Pull complete 18:31:54 09d5a3f70313 Downloading [==========================================> ] 93.54MB/109.2MB 18:31:54 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 18:31:54 eabd8714fec9 Extracting [==============> ] 106.4MB/375MB 18:31:54 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 18:31:54 1e017ebebdbd Pull complete 18:31:54 e73cb4a42719 Extracting [=====================> ] 47.91MB/109.1MB 18:31:54 56aca8a42329 Downloading [===================> ] 28.65MB/71.91MB 18:31:54 c49e0ee60bfb Extracting [========> ] 17.83MB/107.3MB 18:31:54 55f2b468da67 Downloading [========================================> ] 209.8MB/257.9MB 18:31:54 09d5a3f70313 Downloading [===============================================> ] 104.3MB/109.2MB 18:31:54 eabd8714fec9 Extracting [==============> ] 110.3MB/375MB 18:31:54 1ccde423731d Pull complete 18:31:54 7221d93db8a9 Extracting [==================================================>] 100B/100B 18:31:54 7221d93db8a9 Extracting [==================================================>] 100B/100B 18:31:54 09d5a3f70313 Verifying Checksum 18:31:54 09d5a3f70313 Download complete 18:31:54 e73cb4a42719 Extracting [=======================> ] 51.25MB/109.1MB 18:31:54 56aca8a42329 Downloading [==========================> ] 37.85MB/71.91MB 18:31:54 c49e0ee60bfb Extracting [=========> ] 21.17MB/107.3MB 18:31:54 55f2b468da67 Downloading 
[===========================================> ] 226MB/257.9MB 18:31:54 eabd8714fec9 Extracting [===============> ] 114.8MB/375MB 18:31:54 7221d93db8a9 Pull complete 18:31:54 56aca8a42329 Downloading [==================================> ] 50.28MB/71.91MB 18:31:54 7df673c7455d Extracting [==================================================>] 694B/694B 18:31:54 7df673c7455d Extracting [==================================================>] 694B/694B 18:31:54 e73cb4a42719 Extracting [========================> ] 53.48MB/109.1MB 18:31:54 c49e0ee60bfb Extracting [============> ] 26.18MB/107.3MB 18:31:54 55f2b468da67 Downloading [==============================================> ] 240.6MB/257.9MB 18:31:54 eabd8714fec9 Extracting [===============> ] 119.8MB/375MB 18:31:54 fbe227156a9a Downloading [> ] 146.4kB/14.63MB 18:31:54 c49e0ee60bfb Extracting [==============> ] 30.64MB/107.3MB 18:31:54 55f2b468da67 Download complete 18:31:54 e73cb4a42719 Extracting [=========================> ] 56.26MB/109.1MB 18:31:54 7df673c7455d Pull complete 18:31:54 eabd8714fec9 Extracting [================> ] 124.8MB/375MB 18:31:54 56aca8a42329 Downloading [============================================> ] 63.8MB/71.91MB 18:31:55 prometheus Pulled 18:31:55 fbe227156a9a Downloading [=======> ] 2.211MB/14.63MB 18:31:55 56aca8a42329 Download complete 18:31:55 55f2b468da67 Extracting [> ] 557.1kB/257.9MB 18:31:55 c49e0ee60bfb Extracting [================> ] 36.21MB/107.3MB 18:31:55 e73cb4a42719 Extracting [===========================> ] 59.6MB/109.1MB 18:31:55 eabd8714fec9 Extracting [=================> ] 129.2MB/375MB 18:31:55 fbe227156a9a Downloading [=========================================> ] 12.09MB/14.63MB 18:31:55 fbe227156a9a Verifying Checksum 18:31:55 fbe227156a9a Download complete 18:31:55 b56567b07821 Downloading [==================================================>] 1.077kB/1.077kB 18:31:55 b56567b07821 Verifying Checksum 18:31:55 b56567b07821 Download complete 18:31:55 55f2b468da67 Extracting [==> ] 13.93MB/257.9MB 18:31:55 e73cb4a42719 Extracting [=============================> ] 64.06MB/109.1MB 18:31:55 56aca8a42329 Extracting [> ] 557.1kB/71.91MB 18:31:55 c49e0ee60bfb Extracting [==================> ] 39.55MB/107.3MB 18:31:55 eabd8714fec9 Extracting [=================> ] 133.7MB/375MB 18:31:55 55f2b468da67 Extracting [===> ] 20.61MB/257.9MB 18:31:55 e73cb4a42719 Extracting [===============================> ] 69.63MB/109.1MB 18:31:55 c49e0ee60bfb Extracting [===================> ] 41.78MB/107.3MB 18:31:55 7abf0dc59d35 Downloading [==================================================>] 1.035kB/1.035kB 18:31:55 7abf0dc59d35 Verifying Checksum 18:31:55 7abf0dc59d35 Download complete 18:31:55 56aca8a42329 Extracting [===> ] 4.456MB/71.91MB 18:31:55 eabd8714fec9 Extracting [==================> ] 137MB/375MB 18:31:55 e73cb4a42719 Extracting [=================================> ] 74.09MB/109.1MB 18:31:55 c49e0ee60bfb Extracting [=====================> ] 45.68MB/107.3MB 18:31:55 eabd8714fec9 Extracting [==================> ] 139.3MB/375MB 18:31:55 56aca8a42329 Extracting [=====> ] 8.356MB/71.91MB 18:31:55 f243361b999b Downloading [============================> ] 3.003kB/5.242kB 18:31:55 f243361b999b Download complete 18:31:55 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB 18:31:55 5efc16ba9cdc Downloading [=======> ] 3.002kB/19.52kB 18:31:55 5efc16ba9cdc Downloading [==================================================>] 19.52kB/19.52kB 18:31:55 5efc16ba9cdc Verifying Checksum 18:31:55 5efc16ba9cdc Download complete 18:31:55 
c49e0ee60bfb Extracting [=======================> ] 50.69MB/107.3MB 18:31:55 e73cb4a42719 Extracting [===================================> ] 77.43MB/109.1MB 18:31:55 eabd8714fec9 Extracting [===================> ] 142.6MB/375MB 18:31:55 56aca8a42329 Extracting [========> ] 12.26MB/71.91MB 18:31:55 991de477d40a Downloading [==================================================>] 1.035kB/1.035kB 18:31:55 991de477d40a Verifying Checksum 18:31:55 991de477d40a Download complete 18:31:55 55f2b468da67 Extracting [======> ] 35.65MB/257.9MB 18:31:55 c49e0ee60bfb Extracting [=========================> ] 54.59MB/107.3MB 18:31:55 e73cb4a42719 Extracting [=====================================> ] 80.77MB/109.1MB 18:31:55 56aca8a42329 Extracting [==========> ] 15.6MB/71.91MB 18:31:55 eabd8714fec9 Extracting [===================> ] 146.5MB/375MB 18:31:55 55f2b468da67 Extracting [=========> ] 49.02MB/257.9MB 18:31:55 c49e0ee60bfb Extracting [===========================> ] 58.49MB/107.3MB 18:31:55 e73cb4a42719 Extracting [======================================> ] 84.67MB/109.1MB 18:31:55 eabd8714fec9 Extracting [===================> ] 149.8MB/375MB 18:31:55 56aca8a42329 Extracting [=============> ] 19.5MB/71.91MB 18:31:55 55f2b468da67 Extracting [===========> ] 61.28MB/257.9MB 18:31:55 c49e0ee60bfb Extracting [============================> ] 61.83MB/107.3MB 18:31:55 e73cb4a42719 Extracting [========================================> ] 89.13MB/109.1MB 18:31:55 56aca8a42329 Extracting [=================> ] 24.51MB/71.91MB 18:31:55 eabd8714fec9 Extracting [====================> ] 152.6MB/375MB 18:31:55 55f2b468da67 Extracting [=============> ] 69.07MB/257.9MB 18:31:55 c49e0ee60bfb Extracting [==============================> ] 65.73MB/107.3MB 18:31:55 56aca8a42329 Extracting [===================> ] 28.41MB/71.91MB 18:31:55 e73cb4a42719 Extracting [==========================================> ] 92.47MB/109.1MB 18:31:55 55f2b468da67 Extracting [===============> ] 80.22MB/257.9MB 18:31:56 eabd8714fec9 Extracting [====================> ] 156.5MB/375MB 18:31:56 c49e0ee60bfb Extracting [===============================> ] 68.52MB/107.3MB 18:31:56 e73cb4a42719 Extracting [===========================================> ] 94.7MB/109.1MB 18:31:56 56aca8a42329 Extracting [======================> ] 32.31MB/71.91MB 18:31:56 55f2b468da67 Extracting [=================> ] 88.01MB/257.9MB 18:31:56 eabd8714fec9 Extracting [=====================> ] 160.4MB/375MB 18:31:56 c49e0ee60bfb Extracting [=================================> ] 71.86MB/107.3MB 18:31:56 55f2b468da67 Extracting [==================> ] 96.37MB/257.9MB 18:31:56 56aca8a42329 Extracting [=========================> ] 36.21MB/71.91MB 18:31:56 eabd8714fec9 Extracting [======================> ] 165.4MB/375MB 18:31:56 c49e0ee60bfb Extracting [===================================> ] 75.76MB/107.3MB 18:31:56 56aca8a42329 Extracting [==========================> ] 38.44MB/71.91MB 18:31:56 eabd8714fec9 Extracting [=======================> ] 175.5MB/375MB 18:31:56 55f2b468da67 Extracting [===================> ] 102.5MB/257.9MB 18:31:56 c49e0ee60bfb Extracting [=====================================> ] 79.66MB/107.3MB 18:31:56 56aca8a42329 Extracting [=============================> ] 42.34MB/71.91MB 18:31:56 eabd8714fec9 Extracting [========================> ] 186.6MB/375MB 18:31:56 55f2b468da67 Extracting [====================> ] 108.1MB/257.9MB 18:31:56 c49e0ee60bfb Extracting [======================================> ] 83.56MB/107.3MB 18:31:56 e73cb4a42719 Extracting 
[============================================> ] 96.93MB/109.1MB 18:31:56 56aca8a42329 Extracting [================================> ] 46.79MB/71.91MB 18:31:56 eabd8714fec9 Extracting [==========================> ] 200MB/375MB 18:31:56 55f2b468da67 Extracting [=====================> ] 112MB/257.9MB 18:31:56 c49e0ee60bfb Extracting [========================================> ] 86.34MB/107.3MB 18:31:56 e73cb4a42719 Extracting [=============================================> ] 99.71MB/109.1MB 18:31:56 56aca8a42329 Extracting [==================================> ] 50.14MB/71.91MB 18:31:56 eabd8714fec9 Extracting [===========================> ] 208.9MB/375MB 18:31:56 55f2b468da67 Extracting [======================> ] 115.3MB/257.9MB 18:31:56 c49e0ee60bfb Extracting [==========================================> ] 91.36MB/107.3MB 18:31:56 56aca8a42329 Extracting [=====================================> ] 53.48MB/71.91MB 18:31:56 eabd8714fec9 Extracting [============================> ] 216.7MB/375MB 18:31:56 e73cb4a42719 Extracting [===============================================> ] 103.1MB/109.1MB 18:31:56 55f2b468da67 Extracting [======================> ] 118.1MB/257.9MB 18:31:56 c49e0ee60bfb Extracting [=============================================> ] 97.48MB/107.3MB 18:31:56 56aca8a42329 Extracting [=======================================> ] 56.82MB/71.91MB 18:31:56 eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB 18:31:56 e73cb4a42719 Extracting [================================================> ] 104.7MB/109.1MB 18:31:56 55f2b468da67 Extracting [=======================> ] 121.4MB/257.9MB 18:31:56 56aca8a42329 Extracting [==========================================> ] 61.28MB/71.91MB 18:31:56 eabd8714fec9 Extracting [==============================> ] 225.1MB/375MB 18:31:56 c49e0ee60bfb Extracting [==============================================> ] 100.3MB/107.3MB 18:31:56 e73cb4a42719 Extracting [=================================================> ] 107MB/109.1MB 18:31:56 55f2b468da67 Extracting [========================> ] 126.5MB/257.9MB 18:31:57 56aca8a42329 Extracting [=============================================> ] 65.18MB/71.91MB 18:31:57 eabd8714fec9 Extracting [==============================> ] 230.1MB/375MB 18:31:57 c49e0ee60bfb Extracting [================================================> ] 103.1MB/107.3MB 18:31:57 e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 18:31:57 55f2b468da67 Extracting [=========================> ] 129.8MB/257.9MB 18:31:57 56aca8a42329 Extracting [===============================================> ] 68.52MB/71.91MB 18:31:57 eabd8714fec9 Extracting [===============================> ] 234MB/375MB 18:31:57 e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 18:31:57 55f2b468da67 Extracting [=========================> ] 133.1MB/257.9MB 18:31:57 c49e0ee60bfb Extracting [================================================> ] 104.2MB/107.3MB 18:31:57 eabd8714fec9 Extracting [===============================> ] 239.5MB/375MB 18:31:57 56aca8a42329 Extracting [=================================================> ] 71.86MB/71.91MB 18:31:57 56aca8a42329 Extracting [==================================================>] 71.91MB/71.91MB 18:31:57 55f2b468da67 Extracting [==========================> ] 137MB/257.9MB 18:31:57 c49e0ee60bfb Extracting [=================================================> ] 105.3MB/107.3MB 18:31:57 eabd8714fec9 Extracting 
[================================> ] 243.4MB/375MB 18:31:57 c49e0ee60bfb Extracting [==================================================>] 107.3MB/107.3MB 18:31:57 55f2b468da67 Extracting [===========================> ] 141.5MB/257.9MB 18:31:57 eabd8714fec9 Extracting [=================================> ] 249MB/375MB 18:31:57 55f2b468da67 Extracting [============================> ] 147.1MB/257.9MB 18:31:57 55f2b468da67 Extracting [=============================> ] 150.4MB/257.9MB 18:31:57 eabd8714fec9 Extracting [=================================> ] 252.9MB/375MB 18:31:57 55f2b468da67 Extracting [==============================> ] 155.4MB/257.9MB 18:31:57 eabd8714fec9 Extracting [==================================> ] 258.5MB/375MB 18:31:58 55f2b468da67 Extracting [===============================> ] 160.4MB/257.9MB 18:31:58 eabd8714fec9 Extracting [===================================> ] 264MB/375MB 18:31:58 55f2b468da67 Extracting [================================> ] 167.1MB/257.9MB 18:31:58 eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB 18:31:58 eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB 18:31:58 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB 18:31:58 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB 18:31:58 eabd8714fec9 Extracting [====================================> ] 270.7MB/375MB 18:31:58 e73cb4a42719 Pull complete 18:31:58 c49e0ee60bfb Pull complete 18:31:58 56aca8a42329 Pull complete 18:31:58 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB 18:31:58 eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB 18:31:59 fbe227156a9a Extracting [> ] 163.8kB/14.63MB 18:31:59 55f2b468da67 Extracting [=================================> ] 172.7MB/257.9MB 18:31:59 eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 18:31:59 384497dbce3b Extracting [> ] 557.1kB/63.48MB 18:31:59 55f2b468da67 Extracting [=================================> ] 173.2MB/257.9MB 18:31:59 fbe227156a9a Extracting [=> ] 327.7kB/14.63MB 18:31:59 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 18:31:59 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 18:31:59 eabd8714fec9 Extracting [====================================> ] 273MB/375MB 18:31:59 fbe227156a9a Extracting [========> ] 2.621MB/14.63MB 18:31:59 fbe227156a9a Extracting [==========> ] 3.113MB/14.63MB 18:31:59 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 18:31:59 eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 18:31:59 384497dbce3b Extracting [> ] 1.114MB/63.48MB 18:31:59 fbe227156a9a Extracting [===============> ] 4.424MB/14.63MB 18:31:59 55f2b468da67 Extracting [=================================> ] 174.4MB/257.9MB 18:31:59 fbe227156a9a Extracting [================> ] 4.915MB/14.63MB 18:31:59 a83b68436f09 Pull complete 18:31:59 eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB 18:32:00 55f2b468da67 Extracting [=================================> ] 174.9MB/257.9MB 18:32:00 fbe227156a9a Extracting [===================> ] 5.571MB/14.63MB 18:32:00 eabd8714fec9 Extracting [====================================> ] 275.7MB/375MB 18:32:00 787d6bee9571 Extracting [==================================================>] 127B/127B 18:32:00 787d6bee9571 Extracting 
[==================================================>] 127B/127B 18:32:00 fbe227156a9a Extracting [===========================> ] 8.192MB/14.63MB 18:32:00 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB 18:32:00 384497dbce3b Extracting [=> ] 1.671MB/63.48MB 18:32:00 eabd8714fec9 Extracting [=====================================> ] 278.5MB/375MB 18:32:00 fbe227156a9a Extracting [==================================> ] 9.994MB/14.63MB 18:32:00 55f2b468da67 Extracting [==================================> ] 177.1MB/257.9MB 18:32:00 fbe227156a9a Extracting [=====================================> ] 10.98MB/14.63MB 18:32:00 eabd8714fec9 Extracting [=====================================> ] 281.3MB/375MB 18:32:00 384497dbce3b Extracting [=> ] 2.228MB/63.48MB 18:32:00 55f2b468da67 Extracting [==================================> ] 178.3MB/257.9MB 18:32:00 55f2b468da67 Extracting [==================================> ] 179.4MB/257.9MB 18:32:00 eabd8714fec9 Extracting [=====================================> ] 283MB/375MB 18:32:00 fbe227156a9a Extracting [========================================> ] 11.8MB/14.63MB 18:32:00 384497dbce3b Extracting [==> ] 2.785MB/63.48MB 18:32:00 55f2b468da67 Extracting [==================================> ] 180.5MB/257.9MB 18:32:00 eabd8714fec9 Extracting [======================================> ] 288MB/375MB 18:32:00 fbe227156a9a Extracting [==================================================>] 14.63MB/14.63MB 18:32:00 55f2b468da67 Extracting [===================================> ] 184.9MB/257.9MB 18:32:00 eabd8714fec9 Extracting [=======================================> ] 293MB/375MB 18:32:01 55f2b468da67 Extracting [=====================================> ] 191.6MB/257.9MB 18:32:01 eabd8714fec9 Extracting [=======================================> ] 295.2MB/375MB 18:32:01 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB 18:32:01 384497dbce3b Extracting [===> ] 3.899MB/63.48MB 18:32:01 eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB 18:32:01 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB 18:32:01 384497dbce3b Extracting [===> ] 4.456MB/63.48MB 18:32:01 eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 18:32:01 eabd8714fec9 Extracting [=======================================> ] 297.5MB/375MB 18:32:01 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 18:32:01 787d6bee9571 Pull complete 18:32:01 fbe227156a9a Pull complete 18:32:02 13ff0988aaea Extracting [==================================================>] 167B/167B 18:32:02 13ff0988aaea Extracting [==================================================>] 167B/167B 18:32:02 eabd8714fec9 Extracting [=======================================> ] 298.6MB/375MB 18:32:02 55f2b468da67 Extracting [======================================> ] 197.2MB/257.9MB 18:32:02 384497dbce3b Extracting [===> ] 5.014MB/63.48MB 18:32:02 eabd8714fec9 Extracting [=======================================> ] 299.7MB/375MB 18:32:02 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 18:32:02 b56567b07821 Extracting [==================================================>] 1.077kB/1.077kB 18:32:02 b56567b07821 Extracting [==================================================>] 1.077kB/1.077kB 18:32:02 55f2b468da67 Extracting [======================================> ] 200.5MB/257.9MB 18:32:02 eabd8714fec9 Extracting 
[========================================> ] 301.9MB/375MB 18:32:02 384497dbce3b Extracting [======> ] 7.799MB/63.48MB 18:32:02 55f2b468da67 Extracting [======================================> ] 201.1MB/257.9MB 18:32:02 eabd8714fec9 Extracting [========================================> ] 302.5MB/375MB 18:32:03 384497dbce3b Extracting [=======> ] 9.47MB/63.48MB 18:32:03 384497dbce3b Extracting [=========> ] 12.26MB/63.48MB 18:32:03 eabd8714fec9 Extracting [========================================> ] 303.6MB/375MB 18:32:03 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB 18:32:03 384497dbce3b Extracting [============> ] 15.6MB/63.48MB 18:32:03 eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB 18:32:03 55f2b468da67 Extracting [=======================================> ] 204.4MB/257.9MB 18:32:03 384497dbce3b Extracting [=============> ] 16.71MB/63.48MB 18:32:03 eabd8714fec9 Extracting [========================================> ] 306.4MB/375MB 18:32:03 55f2b468da67 Extracting [=======================================> ] 205MB/257.9MB 18:32:03 384497dbce3b Extracting [=============> ] 17.27MB/63.48MB 18:32:03 eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 18:32:03 55f2b468da67 Extracting [=======================================> ] 206.1MB/257.9MB 18:32:03 384497dbce3b Extracting [================> ] 21.17MB/63.48MB 18:32:03 384497dbce3b Extracting [===================> ] 24.51MB/63.48MB 18:32:03 eabd8714fec9 Extracting [=========================================> ] 307.5MB/375MB 18:32:04 55f2b468da67 Extracting [========================================> ] 206.7MB/257.9MB 18:32:04 384497dbce3b Extracting [=====================> ] 27.85MB/63.48MB 18:32:04 13ff0988aaea Pull complete 18:32:04 eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB 18:32:04 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB 18:32:04 384497dbce3b Extracting [========================> ] 30.64MB/63.48MB 18:32:04 eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB 18:32:04 55f2b468da67 Extracting [========================================> ] 208.3MB/257.9MB 18:32:04 eabd8714fec9 Extracting [=========================================> ] 310.8MB/375MB 18:32:04 384497dbce3b Extracting [=========================> ] 31.75MB/63.48MB 18:32:04 55f2b468da67 Extracting [=========================================> ] 211.7MB/257.9MB 18:32:04 eabd8714fec9 Extracting [=========================================> ] 313.1MB/375MB 18:32:04 b56567b07821 Pull complete 18:32:04 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 18:32:04 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 18:32:04 384497dbce3b Extracting [===========================> ] 34.54MB/63.48MB 18:32:04 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB 18:32:04 eabd8714fec9 Extracting [=========================================> ] 314.7MB/375MB 18:32:04 384497dbce3b Extracting [============================> ] 36.77MB/63.48MB 18:32:04 55f2b468da67 Extracting [==========================================> ] 216.7MB/257.9MB 18:32:04 eabd8714fec9 Extracting [==========================================> ] 318.6MB/375MB 18:32:04 384497dbce3b Extracting [==============================> ] 38.99MB/63.48MB 18:32:04 55f2b468da67 Extracting 
[==========================================> ] 221.2MB/257.9MB 18:32:04 384497dbce3b Extracting [=================================> ] 42.34MB/63.48MB 18:32:04 384497dbce3b Extracting [====================================> ] 46.24MB/63.48MB 18:32:05 384497dbce3b Extracting [=====================================> ] 47.35MB/63.48MB 18:32:05 55f2b468da67 Extracting [==========================================> ] 221.7MB/257.9MB 18:32:05 eabd8714fec9 Extracting [===========================================> ] 323.1MB/375MB 18:32:05 55f2b468da67 Extracting [===========================================> ] 222.8MB/257.9MB 18:32:05 384497dbce3b Extracting [=======================================> ] 49.58MB/63.48MB 18:32:05 f243361b999b Extracting [==================================================>] 5.242kB/5.242kB 18:32:05 f243361b999b Extracting [==================================================>] 5.242kB/5.242kB 18:32:05 eabd8714fec9 Extracting [===========================================> ] 324.8MB/375MB 18:32:05 384497dbce3b Extracting [=======================================> ] 50.69MB/63.48MB 18:32:05 384497dbce3b Extracting [==========================================> ] 54.59MB/63.48MB 18:32:05 55f2b468da67 Extracting [===========================================> ] 223.9MB/257.9MB 18:32:05 eabd8714fec9 Extracting [===========================================> ] 325.9MB/375MB 18:32:05 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB 18:32:05 55f2b468da67 Extracting [===========================================> ] 225.6MB/257.9MB 18:32:05 4b82842ab819 Pull complete 18:32:05 eabd8714fec9 Extracting [===========================================> ] 327MB/375MB 18:32:05 7e568a0dc8fb Extracting [==================================================>] 184B/184B 18:32:05 7e568a0dc8fb Extracting [==================================================>] 184B/184B 18:32:05 f243361b999b Pull complete 18:32:05 384497dbce3b Extracting [==============================================> ] 59.6MB/63.48MB 18:32:05 eabd8714fec9 Extracting [===========================================> ] 327.5MB/375MB 18:32:05 55f2b468da67 Extracting [===========================================> ] 226.7MB/257.9MB 18:32:06 7abf0dc59d35 Extracting [==================================================>] 1.035kB/1.035kB 18:32:06 7abf0dc59d35 Extracting [==================================================>] 1.035kB/1.035kB 18:32:06 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB 18:32:06 eabd8714fec9 Extracting [===========================================> ] 328.7MB/375MB 18:32:06 55f2b468da67 Extracting [============================================> ] 227.8MB/257.9MB 18:32:06 384497dbce3b Extracting [=================================================> ] 62.39MB/63.48MB 18:32:06 7e568a0dc8fb Pull complete 18:32:06 eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB 18:32:06 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 18:32:06 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 18:32:07 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB 18:32:07 eabd8714fec9 Extracting [============================================> ] 330.9MB/375MB 18:32:07 7abf0dc59d35 Pull complete 18:32:07 eabd8714fec9 Extracting [============================================> ] 332MB/375MB 18:32:07 55f2b468da67 Extracting 
[============================================> ] 231.2MB/257.9MB 18:32:07 eabd8714fec9 Extracting [============================================> ] 334.2MB/375MB 18:32:07 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB 18:32:07 eabd8714fec9 Extracting [=============================================> ] 338.1MB/375MB 18:32:07 55f2b468da67 Extracting [=============================================> ] 234.5MB/257.9MB 18:32:07 eabd8714fec9 Extracting [=============================================> ] 340.4MB/375MB 18:32:07 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB 18:32:07 eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 18:32:08 55f2b468da67 Extracting [==============================================> ] 238.4MB/257.9MB 18:32:08 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 18:32:08 55f2b468da67 Extracting [==============================================> ] 241.8MB/257.9MB 18:32:08 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 18:32:08 991de477d40a Extracting [==================================================>] 1.035kB/1.035kB 18:32:08 991de477d40a Extracting [==================================================>] 1.035kB/1.035kB 18:32:08 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 18:32:08 55f2b468da67 Extracting [=================================================> ] 252.9MB/257.9MB 18:32:08 384497dbce3b Pull complete 18:32:09 eabd8714fec9 Extracting [=============================================> ] 344.8MB/375MB 18:32:09 55f2b468da67 Extracting [=================================================> ] 255.1MB/257.9MB 18:32:09 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 18:32:09 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 18:32:09 eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB 18:32:09 eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB 18:32:09 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 18:32:09 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 18:32:09 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 18:32:09 postgres Pulled 18:32:09 991de477d40a Pull complete 18:32:09 55f2b468da67 Pull complete 18:32:09 5efc16ba9cdc Extracting [==================================================>] 19.52kB/19.52kB 18:32:09 5efc16ba9cdc Extracting [==================================================>] 19.52kB/19.52kB 18:32:09 82bfc142787e Extracting [> ] 98.3kB/8.613MB 18:32:09 eabd8714fec9 Extracting [===============================================> ] 357.6MB/375MB 18:32:09 055b9255fa03 Pull complete 18:32:09 82bfc142787e Extracting [=====> ] 983kB/8.613MB 18:32:09 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 18:32:09 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 18:32:09 eabd8714fec9 Extracting [================================================> ] 365.4MB/375MB 18:32:09 5efc16ba9cdc Pull complete 18:32:09 policy-db-migrator Pulled 18:32:09 82bfc142787e Extracting [================================================> ] 8.356MB/8.613MB 18:32:09 82bfc142787e Extracting 
[==================================================>] 8.613MB/8.613MB 18:32:09 eabd8714fec9 Extracting [=================================================> ] 369.3MB/375MB 18:32:09 eabd8714fec9 Extracting [=================================================> ] 373.2MB/375MB 18:32:09 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 18:32:11 b176d7edde70 Pull complete 18:32:12 82bfc142787e Pull complete 18:32:13 eabd8714fec9 Pull complete 18:32:14 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 18:32:14 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 18:32:15 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 18:32:15 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 18:32:15 46baca71a4ef Pull complete 18:32:15 grafana Pulled 18:32:15 45fd2fec8a19 Pull complete 18:32:15 b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 18:32:15 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 18:32:15 b0e0ef7895f4 Extracting [==================> ] 13.76MB/37.01MB 18:32:15 8f10199ed94b Extracting [============================> ] 5.014MB/8.768MB 18:32:15 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 18:32:15 b0e0ef7895f4 Extracting [========================================> ] 30.28MB/37.01MB 18:32:15 8f10199ed94b Pull complete 18:32:15 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 18:32:15 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 18:32:15 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 18:32:15 b0e0ef7895f4 Pull complete 18:32:15 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 18:32:15 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 18:32:15 f963a77d2726 Pull complete 18:32:15 c0c90eeb8aca Pull complete 18:32:15 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 18:32:15 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 18:32:15 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 18:32:16 5cfb27c10ea5 Pull complete 18:32:16 f3a82e9f1761 Extracting [==============> ] 12.85MB/44.41MB 18:32:16 40a5eed61bb0 Extracting [==================================================>] 98B/98B 18:32:16 40a5eed61bb0 Extracting [==================================================>] 98B/98B 18:32:16 f3a82e9f1761 Extracting [=============================> ] 26.61MB/44.41MB 18:32:16 40a5eed61bb0 Pull complete 18:32:16 e040ea11fa10 Extracting [==================================================>] 173B/173B 18:32:16 e040ea11fa10 Extracting [==================================================>] 173B/173B 18:32:16 f3a82e9f1761 Extracting [=============================================> ] 40.83MB/44.41MB 18:32:16 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 18:32:16 e040ea11fa10 Pull complete 18:32:16 f3a82e9f1761 Pull complete 18:32:16 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 18:32:16 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 18:32:16 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 18:32:16 79161a3f5362 Pull complete 18:32:16 9c266ba63f51 
Extracting [==================================================>] 1.105kB/1.105kB 18:32:16 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 18:32:16 09d5a3f70313 Extracting [======> ] 14.48MB/109.2MB 18:32:16 9c266ba63f51 Pull complete 18:32:16 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 18:32:16 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 18:32:16 09d5a3f70313 Extracting [==============> ] 30.64MB/109.2MB 18:32:16 09d5a3f70313 Extracting [=======================> ] 52.36MB/109.2MB 18:32:16 2e8a7df9c2ee Pull complete 18:32:16 10f05dd8b1db Extracting [==================================================>] 98B/98B 18:32:16 10f05dd8b1db Extracting [==================================================>] 98B/98B 18:32:16 09d5a3f70313 Extracting [==============================> ] 66.85MB/109.2MB 18:32:16 10f05dd8b1db Pull complete 18:32:16 41dac8b43ba6 Extracting [==================================================>] 171B/171B 18:32:16 41dac8b43ba6 Extracting [==================================================>] 171B/171B 18:32:16 09d5a3f70313 Extracting [======================================> ] 83.56MB/109.2MB 18:32:16 41dac8b43ba6 Pull complete 18:32:16 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 18:32:16 09d5a3f70313 Extracting [=============================================> ] 99.16MB/109.2MB 18:32:17 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 18:32:17 09d5a3f70313 Extracting [================================================> ] 106.4MB/109.2MB 18:32:17 71a9f6a9ab4d Pull complete 18:32:17 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 18:32:17 09d5a3f70313 Pull complete 18:32:17 da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 18:32:17 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 18:32:17 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 18:32:17 da3ed5db7103 Extracting [=====> ] 12.81MB/127.4MB 18:32:17 da3ed5db7103 Extracting [======> ] 17.83MB/127.4MB 18:32:17 356f5c2c843b Pull complete 18:32:17 kafka Pulled 18:32:17 da3ed5db7103 Extracting [=============> ] 35.09MB/127.4MB 18:32:17 da3ed5db7103 Extracting [====================> ] 53.48MB/127.4MB 18:32:17 da3ed5db7103 Extracting [============================> ] 71.86MB/127.4MB 18:32:17 da3ed5db7103 Extracting [===================================> ] 90.8MB/127.4MB 18:32:17 da3ed5db7103 Extracting [==========================================> ] 108.1MB/127.4MB 18:32:17 da3ed5db7103 Extracting [==============================================> ] 119.8MB/127.4MB 18:32:18 da3ed5db7103 Extracting [=================================================> ] 125.3MB/127.4MB 18:32:18 da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB 18:32:18 da3ed5db7103 Pull complete 18:32:18 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 18:32:18 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 18:32:18 c955f6e31a04 Pull complete 18:32:18 zookeeper Pulled 18:32:18 Network compose_default Creating 18:32:18 Network compose_default Created 18:32:18 Container prometheus Creating 18:32:18 Container zookeeper Creating 18:32:18 Container postgres Creating 18:32:32 Container prometheus Created 18:32:32 Container grafana Creating 
18:32:32 Container postgres Created
18:32:32 Container policy-db-migrator Creating
18:32:32 Container zookeeper Created
18:32:32 Container kafka Creating
18:32:32 Container policy-db-migrator Created
18:32:32 Container policy-api Creating
18:32:32 Container kafka Created
18:32:32 Container grafana Created
18:32:32 Container policy-api Created
18:32:32 Container policy-pap Creating
18:32:33 Container policy-pap Created
18:32:33 Container policy-xacml-pdp Creating
18:32:33 Container policy-xacml-pdp Created
18:32:33 Container zookeeper Starting
18:32:33 Container postgres Starting
18:32:33 Container prometheus Starting
18:32:34 Container prometheus Started
18:32:34 Container grafana Starting
18:32:34 Container grafana Started
18:32:35 Container zookeeper Started
18:32:35 Container kafka Starting
18:32:35 Container kafka Started
18:32:37 Container postgres Started
18:32:37 Container policy-db-migrator Starting
18:32:38 Container policy-db-migrator Started
18:32:38 Container policy-api Starting
18:32:39 Container policy-api Started
18:32:39 Container policy-pap Starting
18:32:40 Container policy-pap Started
18:32:40 Container policy-xacml-pdp Starting
18:32:41 Container policy-xacml-pdp Started
18:32:41 Prometheus server: http://localhost:30259
18:32:41 Grafana server: http://localhost:30269
18:32:41 Waiting 1 minute for xacml-pdp to start...
18:33:41 Checking if REST port 30004 is open on localhost ...
18:33:41 IMAGE                                                        NAMES              STATUS
18:33:41 nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up About a minute
18:33:41 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up About a minute
18:33:41 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up About a minute
18:33:41 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up About a minute
18:33:41 nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up About a minute
18:33:41 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up About a minute
18:33:41 nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up About a minute
18:33:41 nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up About a minute
18:33:41 Cloning into '/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/csit/resources/tests/models'...
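The "Checking if REST port 30004 is open" step above is typically a small wait loop around the host-mapped xacml-pdp REST port. A minimal sketch of such a check, assuming netcat is available; the script shape and 120-second timeout are illustrative, not the job's actual script:

    #!/bin/bash
    # Hypothetical wait loop for the xacml-pdp REST port (timeout value assumed).
    timeout=120
    until nc -z localhost 30004; do
      sleep 2
      timeout=$((timeout - 2))
      if [ "$timeout" -le 0 ]; then
        echo "REST port 30004 did not open in time" >&2
        exit 1
      fi
    done
    echo "REST port 30004 is open on localhost"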
18:33:42 Building robot framework docker image
18:34:28 sha256:c119f827bf8fed732d3a18613f99e98c12ea4366ece1ba503dcb02d6e7300c32
18:34:32 top - 18:34:32 up 4 min, 0 users, load average: 2.18, 1.68, 0.73
18:34:32 Tasks: 229 total, 1 running, 151 sleeping, 0 stopped, 0 zombie
18:34:32 %Cpu(s): 13.9 us, 3.2 sy, 0.0 ni, 77.6 id, 5.2 wa, 0.0 hi, 0.1 si, 0.1 st
18:34:32
18:34:32          total    used    free    shared    buff/cache    available
18:34:32 Mem:     31G      2.6G    21G     27M       7.1G          28G
18:34:32 Swap:    1.0G     0B      1.0G
18:34:32
18:34:32 IMAGE                                                        NAMES              STATUS
18:34:32 nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up About a minute
18:34:32 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up About a minute
18:34:32 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up About a minute
18:34:32 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up About a minute
18:34:32 nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up About a minute
18:34:32 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up About a minute
18:34:32 nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up About a minute
18:34:32 nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up About a minute
18:34:32
18:34:34 CONTAINER ID   NAME               CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
18:34:34 c27d38409cc0   policy-xacml-pdp   1.00%   173.4MiB / 31.41GiB   0.54%   45.8kB / 57kB     0B / 4.1kB      51
18:34:34 c4aa1862b7a5   policy-pap         0.62%   582.4MiB / 31.41GiB   1.81%   2.14MB / 1.06MB   0B / 139MB      68
18:34:34 37f5a4852278   policy-api         0.10%   418MiB / 31.41GiB     1.30%   1.15MB / 986kB    0B / 0B         57
18:34:34 a41c282ff82c   kafka              1.01%   379.7MiB / 31.41GiB   1.18%   189kB / 177kB     0B / 586kB      83
18:34:34 0207464160b4   grafana            0.15%   104.4MiB / 31.41GiB   0.32%   19.1MB / 172kB    0B / 30.3MB     22
18:34:34 69f405dd3242   zookeeper          0.06%   84.02MiB / 31.41GiB   0.26%   53.8kB / 45.4kB   0B / 406kB      63
18:34:34 ea2fa00b4b70   postgres           0.00%   85.7MiB / 31.41GiB    0.27%   2.56MB / 3.74MB   225kB / 157MB   26
18:34:34 a09ea37f64d6   prometheus         1.63%   21.17MiB / 31.41GiB   0.07%   63.6kB / 3.72kB   4.1kB / 0B      12
18:34:34
18:34:34 Container policy-csit Creating
18:34:35 Container policy-csit Created
18:34:35 Attaching to policy-csit
18:34:35 policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
18:34:35 policy-csit | Run Robot test
18:34:35 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
18:34:35 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
18:34:35 policy-csit | -v POLICY_API_IP:policy-api:6969
18:34:35 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
18:34:35 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
18:34:35 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
18:34:35 policy-csit | -v APEX_IP:policy-apex-pdp:6969
18:34:35 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
18:34:35 policy-csit | -v KAFKA_IP:kafka:9092
18:34:35 policy-csit | -v PROMETHEUS_IP:prometheus:9090
18:34:35 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
18:34:35 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
18:34:35 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
18:34:35 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
18:34:35 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
18:34:35 policy-csit | -v TEMP_FOLDER:/tmp/distribution
18:34:35 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
18:34:35 policy-csit | -v TEST_ENV:docker
18:34:35 policy-csit | -v JAEGER_IP:jaeger:16686
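The ROBOT_VARIABLES listed above are plain Robot Framework -v name:value overrides, so the same suites can be re-run by hand against a running stack; the job then starts them as shown below. A sketch with a subset of those variables (the local paths and output directory here are assumptions, not the container's exact invocation):

    # Hypothetical manual re-run of the CSIT suites; adjust paths to your checkout.
    robot \
      -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
      -v POLICY_API_IP:policy-api:6969 \
      -v POLICY_PAP_IP:policy-pap:6969 \
      -v POLICY_PDPX_IP:policy-xacml-pdp:6969 \
      -v PROMETHEUS_IP:prometheus:9090 \
      -v TEST_ENV:docker \
      --outputdir /tmp/results \
      xacml-pdp-test.robot xacml-pdp-slas.robot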
18:34:35 policy-csit | Starting Robot test suites ...
18:34:36 policy-csit | ==============================================================================
18:34:36 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
18:34:36 policy-csit | ==============================================================================
18:34:36 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
18:34:36 policy-csit | ==============================================================================
18:34:36 policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
18:34:36 policy-csit | ------------------------------------------------------------------------------
18:34:36 policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
18:34:36 policy-csit | ------------------------------------------------------------------------------
18:34:36 policy-csit | MakeTopics :: Creates the Policy topics | PASS |
18:34:36 policy-csit | ------------------------------------------------------------------------------
18:35:04 policy-csit | ExecuteXacmlPolicy | PASS |
18:35:04 policy-csit | ------------------------------------------------------------------------------
18:35:04 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
18:35:04 policy-csit | 4 tests, 4 passed, 0 failed
18:35:04 policy-csit | ==============================================================================
18:35:04 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
18:35:04 policy-csit | ==============================================================================
18:36:04 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
18:36:04 policy-csit | ------------------------------------------------------------------------------
18:36:04 policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
18:36:04 policy-csit | ------------------------------------------------------------------------------
18:36:04 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
18:36:04 policy-csit | 2 tests, 2 passed, 0 failed
18:36:04 policy-csit | ==============================================================================
18:36:04 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
18:36:04 policy-csit | 6 tests, 6 passed, 0 failed
18:36:04 policy-csit | ==============================================================================
18:36:04 policy-csit | Output: /tmp/results/output.xml
18:36:04 policy-csit | Log: /tmp/results/log.html
18:36:04 policy-csit | Report: /tmp/results/report.html
18:36:04 policy-csit | RESULT: 0
18:36:05 policy-csit exited with code 0
18:36:05 IMAGE                                                        NAMES              STATUS
18:36:05 nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up 3 minutes
18:36:05 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up 3 minutes
18:36:05 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up 3 minutes
18:36:05 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up 3 minutes
18:36:05 nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up 3 minutes
18:36:05 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up 3 minutes
18:36:05 nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up 3 minutes
18:36:05 nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up 3 minutes
18:36:05 Shut down started!
18:36:06 Collecting logs from docker compose containers...
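The Healthcheck, Metrics, and ValidatePolicyDecisionsTotalCounter results above boil down to HTTP calls that can also be made by hand while the stack is up. A hedged sketch: the healthcheck path follows the xacml-pdp REST API convention, but the credentials and the exact Prometheus metric name used by the suite are assumptions here:

    # Hypothetical manual health check via the host-mapped REST port;
    # $PDP_USER/$PDP_PASS stand in for the job's real credentials.
    curl -sk -u "$PDP_USER:$PDP_PASS" https://localhost:30004/policy/pdpx/v1/healthcheck

    # Hypothetical decision-counter query against the Prometheus server
    # exposed at :30259 above; the metric name is an assumption.
    curl -s 'http://localhost:30259/api/v1/query?query=pdpx_policy_decisions_total'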
18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314420367Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-15T18:32:35Z 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314704423Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314711074Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314714674Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314717754Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314720494Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314723194Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314726264Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314729144Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314731814Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314734404Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314737894Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314740694Z level=info msg=Target target=[all] 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314745874Z level=info msg="Path Home" path=/usr/share/grafana 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314748504Z level=info msg="Path Data" path=/var/lib/grafana 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314751155Z level=info msg="Path Logs" path=/var/log/grafana 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314753655Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314756305Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 18:36:07 grafana | logger=settings t=2025-06-15T18:32:35.314758955Z level=info msg="App mode production" 18:36:07 grafana | logger=featuremgmt t=2025-06-15T18:32:35.315113013Z level=info msg=FeatureToggles ssoSettingsApi=true prometheusAzureOverrideAudience=true logsInfiniteScrolling=true newFiltersUI=true promQLScope=true logRowsPopoverMenu=true unifiedStorageSearchPermissionFiltering=true azureMonitorEnableUserAuth=true alertRuleRestore=true onPremToCloudMigrations=true newDashboardSharingComponent=true dataplaneFrontendFallback=true pluginsDetailsRightPanel=true prometheusUsesCombobox=true kubernetesClientDashboardsFolders=true transformationsRedesign=true nestedFolders=true logsContextDatasourceUi=true cloudWatchRoundUpEndTime=true lokiLabelNamesQueryApi=true logsExploreTableVisualisation=true 
lokiQuerySplitting=true cloudWatchNewLabelParsing=true externalCorePlugins=true alertingUIOptimizeReducer=true recoveryThreshold=true grafanaconThemes=true alertingSimplifiedRouting=true groupToNestedTableTransformation=true reportingUseRawTimeRange=true pinNavItems=true lokiStructuredMetadata=true preinstallAutoUpdate=true alertingInsights=true newPDFRendering=true lokiQueryHints=true alertingRulePermanentlyDelete=true dashgpt=true useSessionStorageForRedirection=true logsPanelControls=true addFieldFromCalculationStatFunctions=true kubernetesPlaylists=true cloudWatchCrossAccountQuerying=true formatString=true awsAsyncQueryCaching=true alertingApiServer=true azureMonitorPrometheusExemplars=true angularDeprecationUI=true influxdbBackendMigration=true failWrongDSUID=true alertingRuleRecoverDeleted=true recordedQueriesMulti=true alertingRuleVersionHistoryRestore=true dashboardSceneSolo=true correlations=true alertingNotificationsStepMode=true publicDashboardsScene=true dashboardScene=true dashboardSceneForViewers=true unifiedRequestLog=true ssoSettingsSAML=true annotationPermissionUpdate=true tlsMemcached=true alertingQueryAndExpressionsStepMode=true panelMonitoring=true 18:36:07 grafana | logger=sqlstore t=2025-06-15T18:32:35.315172455Z level=info msg="Connecting to DB" dbtype=sqlite3 18:36:07 grafana | logger=sqlstore t=2025-06-15T18:32:35.315190265Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.316593Z level=info msg="Locking database" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.31660535Z level=info msg="Starting DB migrations" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.317222256Z level=info msg="Executing migration" id="create migration_log table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.318026635Z level=info msg="Migration successfully executed" id="create migration_log table" duration=804.149µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.321802849Z level=info msg="Executing migration" id="create user table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.322645481Z level=info msg="Migration successfully executed" id="create user table" duration=841.902µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.326168917Z level=info msg="Executing migration" id="add unique index user.login" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.326833795Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=664.527µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.333267903Z level=info msg="Executing migration" id="add unique index user.email" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.334222447Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=957.084µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.33917753Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.339848807Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=673.407µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.343173139Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.343808385Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=634.746µs 18:36:07 grafana | logger=migrator 
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.347135677Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.349551288Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.41523ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.354198772Z level=info msg="Executing migration" id="create user table v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.354989672Z level=info msg="Migration successfully executed" id="create user table v2" duration=790.33µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.358925459Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.359719699Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=791.22µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.362532429Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.363216847Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=684.078µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.366315533Z level=info msg="Executing migration" id="copy data_source v1 to v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.366666432Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=350.709µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.371916492Z level=info msg="Executing migration" id="Drop old table user_v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.372891836Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=977.363µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.378482584Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.379586122Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.103008ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.382391551Z level=info msg="Executing migration" id="Update user table charset"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.382414992Z level=info msg="Migration successfully executed" id="Update user table charset" duration=23.921µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.384982976Z level=info msg="Executing migration" id="Add last_seen_at column to user"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.385988511Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.004955ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.389223231Z level=info msg="Executing migration" id="Add missing user data"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.393490397Z level=info msg="Migration successfully executed" id="Add missing user data" duration=4.260585ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.407293889Z level=info msg="Executing migration" id="Add is_disabled column to user"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.408724814Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.432915ms
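The user-table sequence above (rename the live table to user_v1, create the v2 shape, recreate its indexes, copy the rows, drop the old copy) is the standard workaround for SQLite's limited ALTER TABLE support: column constraints cannot be changed in place, so the migrator rebuilds the table. A sketch of that five-step dance with a deliberately simplified schema; the columns are illustrative, not Grafana's real user table:

package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumed SQLite driver
)

func mustExec(db *sql.DB, stmt string) {
	if _, err := db.Exec(stmt); err != nil {
		log.Fatalf("%s: %v", stmt, err)
	}
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// A v1 table with a row in it.
	mustExec(db, `CREATE TABLE "user" (id INTEGER PRIMARY KEY, login TEXT)`)
	mustExec(db, `INSERT INTO "user" (login) VALUES ('admin')`)

	// The rebuild: rename aside, create the new shape, copy, drop.
	for _, stmt := range []string{
		`ALTER TABLE "user" RENAME TO user_v1`,
		`CREATE TABLE "user" (id INTEGER PRIMARY KEY, login TEXT NOT NULL, email TEXT NOT NULL DEFAULT '')`,
		`CREATE UNIQUE INDEX "UQE_user_login" ON "user" (login)`,
		`INSERT INTO "user" (id, login) SELECT id, login FROM user_v1`,
		`DROP TABLE user_v1`,
	} {
		mustExec(db, stmt)
	}
}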
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.413582494Z level=info msg="Executing migration" id="Add index user.login/user.email"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.414157409Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=574.545µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.41744706Z level=info msg="Executing migration" id="Add is_service_account column to user"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.418315912Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=868.542µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.421246215Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.43235466Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=11.108835ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.469402509Z level=info msg="Executing migration" id="Add uid column to user"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.471409279Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.007261ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.476941106Z level=info msg="Executing migration" id="Update uid column values for users"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.477436818Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=494.822µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.4811189Z level=info msg="Executing migration" id="Add unique index user_uid"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.482475063Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.355973ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.486075442Z level=info msg="Executing migration" id="Add is_provisioned column to user"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.488215806Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=2.139293ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.491555328Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.491979528Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=423.65µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.497394773Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.49891601Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=1.520498ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.503350621Z level=info msg="Executing migration" id="update login and email fields to lowercase"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.504251163Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=899.402µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.50858004Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.509130144Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=549.414µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.514019465Z level=info msg="Executing migration" id="create temp user table v1-7"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.515494941Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.474356ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.520087656Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.521643824Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.555118ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.526367951Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.527492589Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.123328ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.532354549Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.533785535Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.430506ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.538746018Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.540103702Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.357254ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.544840319Z level=info msg="Executing migration" id="Update temp_user table charset"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.544962522Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=122.253µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.549159266Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.550038248Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=878.202µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.554692253Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.556406326Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.713782ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.56018531Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.561507342Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.322663ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.565324377Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.566187458Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=864.831µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.570177727Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.573459329Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.280762ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.576803422Z level=info msg="Executing migration" id="create temp_user v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.577772516Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=968.664µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.581139989Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.582232286Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.091717ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.586655336Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.58761484Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=961.804µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.591526167Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.592533311Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.006594ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.596347916Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.597736711Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.388175ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.603248978Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.603996526Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=744.738µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.60780175Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.609055701Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=1.253951ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.613526152Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.61422449Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=699.568µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.618086756Z level=info msg="Executing migration" id="create star table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.619315065Z level=info msg="Migration successfully executed" id="create star table" duration=1.222949ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.624715859Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.626023912Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.307313ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.62997725Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
star" duration=1.548748ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.635147689Z level=info msg="Executing migration" id="Add column org_id in star" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.637164038Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=2.015409ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.641991818Z level=info msg="Executing migration" id="Add column updated in star" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.643553057Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.560609ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.647188957Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.648094229Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=904.412µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.651958675Z level=info msg="Executing migration" id="create org table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.652844287Z level=info msg="Migration successfully executed" id="create org table v1" duration=886.272µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.656491288Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.657355989Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=864.412µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.661723658Z level=info msg="Executing migration" id="create org_user table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.663110792Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.384705ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.667010218Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.668332371Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.321613ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.67190673Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.672823773Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=918.533µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.676072753Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.677003096Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=929.143µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.681649141Z level=info msg="Executing migration" id="Update org table charset" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.681750284Z level=info msg="Migration successfully executed" id="Update org table charset" duration=100.363µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.687189919Z level=info msg="Executing migration" id="Update org_user table charset" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.687377203Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=186.574µs 18:36:07 
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.691385723Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.691833214Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=446.66µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.695575257Z level=info msg="Executing migration" id="create dashboard table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.697044733Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.469676ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.701649467Z level=info msg="Executing migration" id="add index dashboard.account_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.702620111Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=970.094µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.706104847Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.706787025Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=681.858µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.709944233Z level=info msg="Executing migration" id="create dashboard_tag table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.710515747Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=568.954µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.713566333Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.714319011Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=749.328µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.718177697Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.719031879Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=854.061µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.722356741Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.729511178Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.190348ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.733792544Z level=info msg="Executing migration" id="create dashboard v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.73443321Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=640.056µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.738210323Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.738831539Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=620.656µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.742366276Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
v2" duration=1.409635ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.747500954Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.748403667Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=900.623µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.791271689Z level=info msg="Executing migration" id="drop table dashboard_v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.793519285Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=2.244436ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.799167805Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.799280958Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=115.743µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.802887918Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.80541283Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.524041ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.810382383Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.813512551Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.130998ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.81711872Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.81994232Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.82632ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.823699383Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.82477874Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.080077ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.829268501Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.831728092Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.458921ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.835517066Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.836559683Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.040756ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.840159141Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.841072654Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=913.303µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.846043487Z level=info msg="Executing migration" id="Update dashboard table charset" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.846193841Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=148.903µs 18:36:07 
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.850009435Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.850159439Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=149.104µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.853966924Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.858539997Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=4.572073ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.863225154Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.865305165Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.078111ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.868962026Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.870992676Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.02605ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.874968944Z level=info msg="Executing migration" id="Add column uid in dashboard"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.877038795Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.068781ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.881868986Z level=info msg="Executing migration" id="Update uid column values in dashboard"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.882274595Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=408.59µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.886289955Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.887980597Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.687042ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.891789241Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.893127985Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.338744ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.897978234Z level=info msg="Executing migration" id="Update dashboard title length"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.898001635Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=24.111µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.902569579Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.903862441Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.292732ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.907868481Z level=info msg="Executing migration" id="create dashboard_provisioning"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.909731327Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.862195ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.918287239Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.923900118Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.61565ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.927508948Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.928240375Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=731.487µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.931165248Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.931976138Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=810.91µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.937434314Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.938474939Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.039825ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.941569266Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.941906844Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=337.278µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.945015231Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.945528843Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=513.232µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.950982589Z level=info msg="Executing migration" id="Add check_sum column"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.954300872Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.316523ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.957998933Z level=info msg="Executing migration" id="Add index for dashboard_title"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.959174862Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.175929ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.962429592Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.962659399Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=229.707µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.96593575Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.966155085Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=218.195µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.971134179Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.971880267Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=743.698µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.974747458Z level=info msg="Executing migration" id="Add isPublic for dashboard"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.976889112Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.141374ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.979820444Z level=info msg="Executing migration" id="Add deleted for dashboard"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.982261085Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.453161ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.987439373Z level=info msg="Executing migration" id="Add index for deleted"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.988183102Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=743.719µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.991533744Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.993734259Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.201545ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.996848516Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:35.998943149Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.094733ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.003608234Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.004144137Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=535.903µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.008140836Z level=info msg="Executing migration" id="Add apiVersion for dashboard"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.010367921Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.226635ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.014065272Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.014611656Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=546.385µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.018793969Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.019159038Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=363.469µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.023091355Z level=info msg="Executing migration" id="create data_source table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.024445188Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.353793ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.028526159Z level=info msg="Executing migration" id="add index data_source.account_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.029717679Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.19212ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.034191849Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.034939387Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=745.608µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.038391142Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.039090639Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=698.907µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.04232427Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.043025216Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=700.776µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.047666501Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.05772479Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=10.058489ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.061301997Z level=info msg="Executing migration" id="create data_source table v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.062268151Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=965.514µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.06586391Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.066628939Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=762.989µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.070993817Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.077448096Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=6.453799ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.11690999Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.118223942Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.309842ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.123488452Z level=info msg="Executing migration" id="Add column with_credentials"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.125894891Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.405999ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.12946761Z level=info msg="Executing migration" id="Add secure json data column"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.131953141Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.485861ms
table charset" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.135631581Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.83µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.139924147Z level=info msg="Executing migration" id="Update initial version to 1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.140108432Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=184.185µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.144233213Z level=info msg="Executing migration" id="Add read_only data column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.150031196Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=5.796933ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.154184179Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.154364624Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=180.585µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.158076325Z level=info msg="Executing migration" id="Update json_data with nulls" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.158240819Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=164.514µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.162942396Z level=info msg="Executing migration" id="Add uid column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.167036407Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.093141ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.170633995Z level=info msg="Executing migration" id="Update uid value" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.170873171Z level=info msg="Migration successfully executed" id="Update uid value" duration=238.856µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.174233174Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.175110385Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=876.781µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.178550231Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.17934562Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=794.629µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.183311778Z level=info msg="Executing migration" id="Add is_prunable column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.186813915Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=3.500317ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.190786753Z level=info msg="Executing migration" id="Add api_version column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.19475066Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.963127ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.19879619Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.198857921Z level=info msg="Migration successfully executed" id="Update secure_json_data column 
to MediumText" duration=62.861µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.204616593Z level=info msg="Executing migration" id="create api_key table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.206304266Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.687763ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.210769676Z level=info msg="Executing migration" id="add index api_key.account_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.212130959Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.360603ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.216177839Z level=info msg="Executing migration" id="add index api_key.key" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.216909367Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=731.438µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.22189439Z level=info msg="Executing migration" id="add index api_key.account_id_name" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.22310567Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.20948ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.226879153Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.228125554Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.24315ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.232518232Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.233463215Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=944.253µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.238321336Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.239018073Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=696.556µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.242513469Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.24946127Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.947031ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.253322225Z level=info msg="Executing migration" id="create api_key table v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.254016853Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=694.238µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.258528714Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.259495848Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=966.814µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.263445505Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.264979884Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.532498ms 18:36:07 grafana | logger=migrator 
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.269747411Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.270483439Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=735.488µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.274158269Z level=info msg="Executing migration" id="copy api_key v1 to v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.274456577Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=294.588µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.278115637Z level=info msg="Executing migration" id="Drop old table api_key_v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.278880046Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=763.929µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.283780647Z level=info msg="Executing migration" id="Update api_key table charset"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.283816188Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=36.691µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.287639042Z level=info msg="Executing migration" id="Add expires to api_key table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.291938298Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.298056ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.296337687Z level=info msg="Executing migration" id="Add service account foreign key"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.298127941Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.790054ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.301700359Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.301856143Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=155.684µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.305920124Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.308465066Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.542542ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.312537567Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.315847278Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.307611ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.32036092Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.321034877Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=672.418µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.325890356Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.326667346Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=775.29µs
msg="Executing migration" id="create dashboard_snapshot table v5 #2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.332434298Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.21272ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.336634321Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.337485052Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=850.411µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.34347153Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.345243834Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.772564ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.350802591Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.352103273Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.301092ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.356070051Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.356086642Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=14.82µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.360690345Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.360837338Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=146.433µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.365685018Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.370551898Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.865501ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.376006863Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.378915715Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.908181ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.383027536Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.383120628Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=93.292µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.386768039Z level=info msg="Executing migration" id="create quota table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.387681031Z level=info msg="Migration successfully executed" id="create quota table v1" duration=912.651µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.458403636Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.460050217Z level=info msg="Migration 
successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.64885ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.463611454Z level=info msg="Executing migration" id="Update quota table charset" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.463687726Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=72.352µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.466822394Z level=info msg="Executing migration" id="create plugin_setting table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.467535021Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=712.487µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.472689988Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.474214586Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.524018ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.478348718Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.48207266Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.724332ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.485522675Z level=info msg="Executing migration" id="Update plugin_setting table charset" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.485709059Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=186.434µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.490840426Z level=info msg="Executing migration" id="update NULL org_id to 1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.491320858Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=480.522µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.494752123Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.504842272Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=10.089579ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.508348718Z level=info msg="Executing migration" id="create session table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.50924752Z level=info msg="Migration successfully executed" id="create session table" duration=896.452µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.514881099Z level=info msg="Executing migration" id="Drop old table playlist table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.515163326Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=292.617µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.51853168Z level=info msg="Executing migration" id="Drop old table playlist_item table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.518775546Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=246.796µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.522444906Z level=info msg="Executing migration" id="create playlist table v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.523756338Z level=info msg="Migration successfully executed" id="create 
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.523756338Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.310963ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.527665715Z level=info msg="Executing migration" id="create playlist item table v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.528625879Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=960.544µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.533804877Z level=info msg="Executing migration" id="Update playlist table charset"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.533994411Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=189.804µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.536970945Z level=info msg="Executing migration" id="Update playlist_item table charset"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.537075247Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=104.302µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.539530898Z level=info msg="Executing migration" id="Add playlist column created_at"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.542731697Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.199799ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.545880304Z level=info msg="Executing migration" id="Add playlist column updated_at"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.549178056Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.297752ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.554400914Z level=info msg="Executing migration" id="drop preferences table v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.55464819Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=246.306µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.558874914Z level=info msg="Executing migration" id="drop preferences table v3"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.559165182Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=289.718µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.562472274Z level=info msg="Executing migration" id="create preferences table v3"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.56392822Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.455655ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.569382734Z level=info msg="Executing migration" id="Update preferences table charset"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.56961035Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=226.196µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.573773212Z level=info msg="Executing migration" id="Add column team_id in preferences"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.577092854Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.318632ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.58057793Z level=info msg="Executing migration" id="Update team_id column values in preferences"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.580891538Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=313.708µs
t=2025-06-15T18:32:36.586066276Z level=info msg="Executing migration" id="Add column week_start in preferences" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.590815583Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.747317ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.59432397Z level=info msg="Executing migration" id="Add column preferences.json_data" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.598281668Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.955268ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.601871006Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.601959438Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=89.042µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.605869804Z level=info msg="Executing migration" id="Add preferences index org_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.606839109Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=969.305µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.613888953Z level=info msg="Executing migration" id="Add preferences index user_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.615523833Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.636501ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.619103921Z level=info msg="Executing migration" id="create alert table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.620827714Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.722944ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.624291369Z level=info msg="Executing migration" id="add index alert org_id & id " 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.625276553Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=986.594µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.630534193Z level=info msg="Executing migration" id="add index alert state" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.631553058Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.018435ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.635173498Z level=info msg="Executing migration" id="add index alert dashboard_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.636153492Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=979.594µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.639814662Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.64094201Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.125808ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.646286092Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.647939362Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.65193ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.651529461Z level=info msg="Executing migration" id="drop 
index UQE_alert_rule_tag_alert_id_tag_id - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.652487464Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=957.173µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.655814337Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.665533007Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.71799ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.671069914Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.671861863Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=791.249µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.675535164Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.676541839Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.006165ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.679776388Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.680214439Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=430.281µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.684575726Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.685189652Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=613.586µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.688893143Z level=info msg="Executing migration" id="create alert_notification table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.690322548Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.428985ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.693936288Z level=info msg="Executing migration" id="Add column is_default" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.69849057Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.554853ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.703707969Z level=info msg="Executing migration" id="Add column frequency" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.707370509Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.6619ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.710973468Z level=info msg="Executing migration" id="Add column send_reminder" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.714670519Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.693781ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.718677178Z level=info msg="Executing migration" id="Add column disable_resolve_message" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.722384009Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.706681ms 18:36:07 grafana | logger=migrator 
t=2025-06-15T18:32:36.726776777Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.727739182Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=961.794µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.731132636Z level=info msg="Executing migration" id="Update alert table charset" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.7312868Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=153.443µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.735082922Z level=info msg="Executing migration" id="Update alert_notification table charset" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.735298928Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=214.936µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.740884615Z level=info msg="Executing migration" id="create notification_journal table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.74227188Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.386425ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.813783585Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.814546473Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=762.358µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.818908791Z level=info msg="Executing migration" id="drop alert_notification_journal" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.819554937Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=645.796µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.824483619Z level=info msg="Executing migration" id="create alert_notification_state table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.825843922Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.359814ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.830178249Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.831276616Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.097877ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.835332666Z level=info msg="Executing migration" id="Add for to alert table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.84115628Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.822024ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.846102873Z level=info msg="Executing migration" id="Add column uid in alert_notification" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.849974437Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.872215ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.853684529Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.854006407Z level=info msg="Migration 
successfully executed" id="Update uid column values in alert_notification" duration=321.788µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.85735047Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.858290933Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=939.963µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.862824775Z level=info msg="Executing migration" id="Remove unique index org_id_name" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.86424847Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.424885ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.867748556Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.87318403Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.435854ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.877120938Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.87720681Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=86.362µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.882670125Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.883666389Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=995.904µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.887138385Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.888187651Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.049156ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.89182799Z level=info msg="Executing migration" id="Drop old annotation table v4" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.892074206Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=246.106µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.896543267Z level=info msg="Executing migration" id="create annotation table v5" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.898250649Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.707432ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.902514574Z level=info msg="Executing migration" id="add index annotation 0 v3" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.904198796Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.683611ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.908468621Z level=info msg="Executing migration" id="add index annotation 1 v3" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.909408004Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=939.483µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.914080309Z level=info msg="Executing migration" id="add index annotation 2 v3" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.915021123Z 
level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=940.454µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.918912389Z level=info msg="Executing migration" id="add index annotation 3 v3" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.920776255Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.862806ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.92461692Z level=info msg="Executing migration" id="add index annotation 4 v3" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.925604313Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=987.093µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.930888064Z level=info msg="Executing migration" id="Update annotation table charset" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.931089369Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=201.445µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.934850392Z level=info msg="Executing migration" id="Add column region_id to annotation table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.941361453Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.510061ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.94492549Z level=info msg="Executing migration" id="Drop category_id index" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.945868524Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=942.784µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.950632191Z level=info msg="Executing migration" id="Add column tags to annotation table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.955072231Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.397948ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.958417554Z level=info msg="Executing migration" id="Create annotation_tag table v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.959212673Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=790.019µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.962577846Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.963555871Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=977.345µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.968525413Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.969394305Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=868.012µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.973489806Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.988153887Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.664271ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.992629127Z level=info msg="Executing migration" id="Create annotation_tag table v3" 18:36:07 grafana | logger=migrator 
t=2025-06-15T18:32:36.993336295Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=706.618µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.997831626Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:36.999006165Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.174259ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.002979583Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.003460155Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=480.652µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.007344191Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.008067688Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=723.397µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.012461647Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.0129587Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=496.892µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.017004989Z level=info msg="Executing migration" id="Add created time to annotation table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.022491335Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.488476ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.026708759Z level=info msg="Executing migration" id="Add updated time to annotation table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.031159129Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.4496ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.035133347Z level=info msg="Executing migration" id="Add index for created in annotation table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.036209703Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.076206ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.04096291Z level=info msg="Executing migration" id="Add index for updated in annotation table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.042615521Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.651721ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.046956128Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.047550002Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=593.694µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.051842059Z level=info msg="Executing migration" id="Add epoch_end column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.056444802Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.601723ms 
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.061026715Z level=info msg="Executing migration" id="Add index for epoch_end"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.062142843Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.115828ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.067530786Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.068045328Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=518.412µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.072393496Z level=info msg="Executing migration" id="Move region to single row"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.073230327Z level=info msg="Migration successfully executed" id="Move region to single row" duration=831.89µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.091818615Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.092701347Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=884.122µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.139217865Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.141545363Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=2.325237ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.149033177Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.150049362Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.017865ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.153694572Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.154691556Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=996.194µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.16414129Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.165160274Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.020954ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.170815805Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.171985013Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.170958ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.175981872Z level=info msg="Executing migration" id="Increase tags column to length 4096"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.176002832Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=22.12µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.182208906Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.182234716Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=26.78µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.186190044Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.186218485Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=29.931µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.19088301Z level=info msg="Executing migration" id="create test_data table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.191539235Z level=info msg="Migration successfully executed" id="create test_data table" duration=656.145µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.195837721Z level=info msg="Executing migration" id="create dashboard_version table v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.196410056Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=571.925µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.200960718Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.202026745Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.068367ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.207229793Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.208092444Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=862.441µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.210959195Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.21118511Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=225.915µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.216505252Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.21687343Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=365.818µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.22251799Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.22253794Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=20.04µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.226117509Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.231709437Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=5.593718ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.235725886Z level=info msg="Executing migration" id="create team table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.236559956Z level=info msg="Migration successfully executed" id="create team table" duration=834.37µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.243048147Z level=info msg="Executing migration" id="add index team.org_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.244596585Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.551049ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.248140653Z level=info msg="Executing migration" id="add unique index team_org_id_name"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.249542087Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.401434ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.254376856Z level=info msg="Executing migration" id="Add column uid in team"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.2577381Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.361033ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.264459625Z level=info msg="Executing migration" id="Update uid column values in team"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.264797434Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=338.19µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.269131711Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.270564196Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.431866ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.273676532Z level=info msg="Executing migration" id="Add column external_uid in team"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.278302656Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.620684ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.283649688Z level=info msg="Executing migration" id="Add column is_provisioned in team"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.288144379Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.496171ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.293192944Z level=info msg="Executing migration" id="create team member table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.293936882Z level=info msg="Migration successfully executed" id="create team member table" duration=743.938µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.297791047Z level=info msg="Executing migration" id="add index team_member.org_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.299028128Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.236241ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.30843113Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.309809554Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.378424ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.316315254Z level=info msg="Executing migration" id="add index team_member.team_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.317227477Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=912.223µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.32262297Z level=info msg="Executing migration" id="Add column email to team table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.327719406Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.095866ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.332266378Z level=info msg="Executing migration" id="Add column external to team_member table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.336893582Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.604774ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.341888516Z level=info msg="Executing migration" id="Add column permission to team_member table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.346491609Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.602723ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.350754044Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.351640366Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=885.912µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.356697851Z level=info msg="Executing migration" id="create dashboard acl table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.357530442Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=834.881µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.364386111Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.365494038Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.111757ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.370123222Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.37125108Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.127858ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.375169196Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.375947176Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=777.96µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.381692348Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.382447447Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=754.91µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.385567813Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.38625998Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=691.377µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.389513921Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.390468554Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=954.313µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.397286492Z level=info msg="Executing migration" id="add index dashboard_permission"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.398378639Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.092227ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.401273781Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.401767423Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=494.121µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.407189636Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.407442714Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=253.117µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.410258293Z level=info msg="Executing migration" id="create tag table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.41097531Z level=info msg="Migration successfully executed" id="create tag table" duration=716.587µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.413841081Z level=info msg="Executing migration" id="add index tag.key_value"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.414767604Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=925.923µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.468414587Z level=info msg="Executing migration" id="create login attempt table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.469177037Z level=info msg="Migration successfully executed" id="create login attempt table" duration=765.2µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.473036101Z level=info msg="Executing migration" id="add index login_attempt.username"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.473729839Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=693.738µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.478073226Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.47943835Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.364814ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.483971821Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.500918179Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=16.946918ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.504736914Z level=info msg="Executing migration" id="create login_attempt v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.505454032Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=717.358µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.510853314Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.511761537Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=908.183µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.514674579Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.514954196Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=279.617µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.51877884Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.519412866Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=633.646µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.525075276Z level=info msg="Executing migration" id="create user auth table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.526228314Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.152878ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.53210986Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.533493674Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.383814ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.537196034Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.537222195Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=27.441µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.542936406Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.548004641Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.067635ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.551223091Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.556320877Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.119957ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.560267604Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.565520143Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.252519ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.570846075Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.575997832Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.151047ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.580216786Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.581080818Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=863.722µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.586386398Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.591889744Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.502346ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.59740811Z level=info msg="Executing migration" id="Add user_unique_id to user_auth"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.602552767Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.140157ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.606770242Z level=info msg="Executing migration" id="create server_lock table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.608019222Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.248771ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.612866872Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.614349008Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.481596ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.620662134Z level=info msg="Executing migration" id="create user auth token table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.621527255Z level=info msg="Migration successfully executed" id="create user auth token table" duration=864.831µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.626324664Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.627290298Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=965.254µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.632227829Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.63389415Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.666021ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.640259598Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.641856847Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.597099ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.646497661Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.652140921Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.64343ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.656428766Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.657321619Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=892.853µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.663437439Z level=info msg="Executing migration" id="add external_session_id to user_auth_token"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.673033576Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=9.592967ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.67683764Z level=info msg="Executing migration" id="create cache_data table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.677895747Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.058757ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.681569467Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.682605893Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.036266ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.689723278Z level=info msg="Executing migration" id="create short_url table v1"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.691362648Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.63937ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.696652029Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.697692935Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.040906ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.703465807Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.703486018Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=20.991µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.70926149Z level=info msg="Executing migration" id="delete alert_definition table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.709469855Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=207.955µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.718384996Z level=info msg="Executing migration" id="recreate alert_definition table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.719567314Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.176878ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.724875915Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.726527796Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.650901ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.735047207Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.736855961Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.807084ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.774523461Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.774551822Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=29.881µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.782710313Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.784311843Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.596489ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.796542244Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:37.798053672Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.510999ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.158610656Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.160521013Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.910747ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.236433425Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.23745123Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.017605ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.244164967Z level=info msg="Executing migration" id="Add column paused in alert_definition"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.248313641Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.148274ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.253467879Z level=info msg="Executing migration" id="drop alert_definition table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.254293239Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=824.86µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.257370287Z level=info msg="Executing migration" id="delete alert_definition_version table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.257451489Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=81.222µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.262151365Z level=info msg="Executing migration" id="recreate alert_definition_version table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.263034818Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=882.963µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.269780505Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.270791631Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.010686ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.275331823Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.276290148Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=955.345µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.282450291Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.282469301Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=19.96µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.287177249Z level=info msg="Executing migration" id="drop alert_definition_version table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.288017489Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=840.79µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.295710152Z level=info msg="Executing migration" id="create alert_instance table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.296632235Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=921.423µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.303250089Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.304172512Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=922.103µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.306944571Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.307897696Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=952.604µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.311266739Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.316964311Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.696762ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.323444253Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.325047843Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.604589ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.330029246Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.331212716Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.18346ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.334453797Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.359628664Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.169127ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.365077149Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.390040351Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=24.961612ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.414433Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.415619779Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.18635ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.42287255Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.423857164Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=984.314µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.427206428Z level=info msg="Executing migration" id="add current_reason column related to current_state"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.432950661Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.740763ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.439495144Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.445253637Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.759493ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.450143959Z level=info msg="Executing migration" id="create alert_rule table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.451928384Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.786425ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.458120998Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.459185335Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.065846ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.463307317Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.46426131Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=953.823µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.46824882Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.469198344Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=948.994µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.475144332Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.475167962Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=24.2µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.478736262Z level=info msg="Executing migration" id="add column for to alert_rule"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.484904855Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.168023ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.488297369Z level=info msg="Executing migration" id="add column annotations to alert_rule"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.49433251Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.034621ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.501479548Z level=info msg="Executing migration" id="add column labels to alert_rule"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.50797853Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.500522ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.511238231Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.512756439Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.517208ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.51680114Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.518598984Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.797844ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.524225204Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.532378428Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=8.154204ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.5364829Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.540798047Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.314617ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.545847884Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.546859248Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.010944ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.558718584Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.567722488Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.009204ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.570561069Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.574900757Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.339228ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.580681481Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.580699141Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=18.37µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.583076371Z level=info msg="Executing migration" id="create alert_rule_version table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.584064045Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=988.245µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.588183278Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.589719576Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.535688ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.596066354Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.597634624Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.56736ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.603315265Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.603334096Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=19.241µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.607464539Z level=info msg="Executing migration" id="add column for to alert_rule_version"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.613767535Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.301996ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.617665803Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.625482797Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.815404ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.672908708Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.683073322Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=10.139204ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.68618228Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.69061375Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.43043ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.69342676Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.699696486Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.268626ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.729475388Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.729504189Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=29.851µs
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.733239231Z level=info msg="Executing migration" id=create_alert_configuration_table
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.734514683Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.275452ms
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.739689022Z level=info
msg="Executing migration" id="Add column default in alert_configuration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.747306292Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.62088ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.752709436Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.752729437Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=23.031µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.755455905Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.760395738Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.942023ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.76410573Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.765239259Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.133189ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.770390827Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.77690736Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.515743ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.780717524Z level=info msg="Executing migration" id=create_ngalert_configuration_table 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.781661088Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=943.004µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.785275129Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.786335134Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.059516ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.791533204Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.798043166Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.509362ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.801575474Z level=info msg="Executing migration" id="create provenance_type table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.802537088Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=960.894µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.806513427Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.807636066Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.121849ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.813025779Z level=info 
msg="Executing migration" id="create alert_image table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.814023514Z level=info msg="Migration successfully executed" id="create alert_image table" duration=996.925µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.820854665Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.822502035Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.647001ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.828412082Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.828439213Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=28.301µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.833499649Z level=info msg="Executing migration" id=create_alert_configuration_history_table 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.835168121Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.667312ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.841698154Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.843417416Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.718992ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.909582645Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.910064587Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.914127349Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.914544999Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=417.02µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.920061736Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.921242906Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.18142ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.924723432Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.932504046Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.779854ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.93587555Z level=info msg="Executing migration" id="create library_element table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.936990738Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.114748ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.94148142Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 18:36:07 grafana | 
logger=migrator t=2025-06-15T18:32:38.942658029Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.17614ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.946801402Z level=info msg="Executing migration" id="create library_element_connection table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.947774497Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=972.555µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.954379831Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.95555459Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.174579ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.959296954Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.960968475Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.667161ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.965008776Z level=info msg="Executing migration" id="increase max description length to 2048" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.965243682Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=236.535µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.971412046Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.971561299Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=150.273µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.975695932Z level=info msg="Executing migration" id="add library_element folder uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.986103021Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.406929ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.990973893Z level=info msg="Executing migration" id="populate library_element folder_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.991466075Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=491.702µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.994716296Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.995897995Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.180889ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:38.999947326Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.000580062Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=632.876µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.00531888Z level=info msg="Executing migration" id="create data_keys table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.007168025Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.849465ms 18:36:07 
grafana | logger=migrator t=2025-06-15T18:32:39.011132193Z level=info msg="Executing migration" id="create secrets table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.012053936Z level=info msg="Migration successfully executed" id="create secrets table" duration=921.203µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.046783442Z level=info msg="Executing migration" id="rename data_keys name column to id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.086566455Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=39.777102ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.128668553Z level=info msg="Executing migration" id="add name column into data_keys" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.137773757Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=9.104564ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.217654538Z level=info msg="Executing migration" id="copy data_keys id column values into name" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.217957947Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=300.079µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.230867325Z level=info msg="Executing migration" id="rename data_keys name column to label" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.262267709Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=31.401104ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.266987697Z level=info msg="Executing migration" id="rename data_keys id column back to name" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.2971438Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.155734ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.300988945Z level=info msg="Executing migration" id="create kv_store table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.301831056Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=844.171µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.307194138Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.307949717Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=755.529µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.312684723Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.312986051Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=301.028µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.317180174Z level=info msg="Executing migration" id="create permission table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.318513927Z level=info msg="Migration successfully executed" id="create permission table" duration=1.337173ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.326087595Z level=info msg="Executing migration" id="add unique index permission.role_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.327277844Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.18954ms 18:36:07 
grafana | logger=migrator t=2025-06-15T18:32:39.334210285Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.335317142Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.106607ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.370347316Z level=info msg="Executing migration" id="create role table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.371894655Z level=info msg="Migration successfully executed" id="create role table" duration=1.548179ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.378852387Z level=info msg="Executing migration" id="add column display_name" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.386815943Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.963196ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.418218118Z level=info msg="Executing migration" id="add column group_name" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.423638952Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.421544ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.43007244Z level=info msg="Executing migration" id="add index role.org_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.430832499Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=759.889µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.435371871Z level=info msg="Executing migration" id="add unique index role_org_id_name" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.437329659Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.958948ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.442298692Z level=info msg="Executing migration" id="add index role_org_id_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.443618845Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.323793ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.449405067Z level=info msg="Executing migration" id="create team role table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.450193046Z level=info msg="Migration successfully executed" id="create team role table" duration=785.819µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.457003684Z level=info msg="Executing migration" id="add index team_role.org_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.458255265Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.250521ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.463865404Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.465043393Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.179829ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.468902488Z level=info msg="Executing migration" id="add index team_role.team_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.470682772Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.779854ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.476760442Z level=info msg="Executing migration" id="create user role table" 18:36:07 grafana | logger=migrator 
t=2025-06-15T18:32:39.477698155Z level=info msg="Migration successfully executed" id="create user role table" duration=938.133µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.481279694Z level=info msg="Executing migration" id="add index user_role.org_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.482480514Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.19989ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.486032301Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.48720634Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.173269ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.493606678Z level=info msg="Executing migration" id="add index user_role.user_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.494688594Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.081486ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.498693863Z level=info msg="Executing migration" id="create builtin role table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.500140709Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.446716ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.50503812Z level=info msg="Executing migration" id="add index builtin_role.role_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.506049845Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.011475ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.510019263Z level=info msg="Executing migration" id="add index builtin_role.name" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.511598462Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.577628ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.517260071Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.525854804Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.595233ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.529428862Z level=info msg="Executing migration" id="add index builtin_role.org_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.530181441Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=754.799µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.533886512Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.534648171Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=761.389µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.541186881Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.542787322Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.603381ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.548915703Z level=info msg="Executing migration" id="add unique index role.uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.550968683Z level=info 
msg="Migration successfully executed" id="add unique index role.uid" duration=2.05206ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.555515876Z level=info msg="Executing migration" id="create seed assignment table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.556277634Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=761.448µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.563605125Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.564754973Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.151968ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.570331601Z level=info msg="Executing migration" id="add column hidden to role table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.578934863Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.599612ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.669768794Z level=info msg="Executing migration" id="permission kind migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.678429869Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.667315ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.682630243Z level=info msg="Executing migration" id="permission attribute migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.689704117Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.073795ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.69308038Z level=info msg="Executing migration" id="permission identifier migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.701710463Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.629243ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.706697366Z level=info msg="Executing migration" id="add permission identifier index" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.707991008Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.290242ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.711499304Z level=info msg="Executing migration" id="add permission action scope role_id index" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.713570696Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=2.070432ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.722488576Z level=info msg="Executing migration" id="remove permission role_id action scope index" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.723778537Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.291722ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.729539319Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.737969797Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.416668ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.742990892Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 18:36:07 grafana | logger=migrator 
t=2025-06-15T18:32:39.744333615Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.344913ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.751302827Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.752372783Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.070016ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.830507771Z level=info msg="Executing migration" id="create query_history table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.834860398Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=4.352837ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.83899318Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.840127308Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.131578ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.845110062Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.845161483Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=54.331µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.847841439Z level=info msg="Executing migration" id="create query_history_details table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.848750931Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=908.983µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.852689928Z level=info msg="Executing migration" id="rbac disabled migrator" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.852787251Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=97.913µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.856887832Z level=info msg="Executing migration" id="teams permissions migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.857349303Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=461.241µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.861646259Z level=info msg="Executing migration" id="dashboard permissions" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.862602043Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=956.754µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.866346835Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.867428961Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.081866ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.871174154Z level=info msg="Executing migration" id="drop managed folder create actions" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.871447421Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=272.847µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.876715091Z level=info msg="Executing migration" 
id="alerting notification permissions" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.877228433Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=511.982µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.880843173Z level=info msg="Executing migration" id="create query_history_star table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.882227787Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.384194ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.887368924Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.888488012Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.118528ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.89371521Z level=info msg="Executing migration" id="add column org_id in query_history_star" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.904156588Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.441288ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.90825929Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.908298671Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=39.861µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.911707534Z level=info msg="Executing migration" id="create correlation table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.91272728Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.016856ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.917892017Z level=info msg="Executing migration" id="add index correlations.uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.918996224Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.103967ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.923775542Z level=info msg="Executing migration" id="add index correlations.source_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.925126466Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.350644ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.929585516Z level=info msg="Executing migration" id="add correlation config column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:39.940518326Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.93275ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.084691274Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.085884623Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.1977ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.228713382Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.230018893Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.307101ms 18:36:07 grafana | logger=migrator 
t=2025-06-15T18:32:40.410799518Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.433737049Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.938321ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.487848547Z level=info msg="Executing migration" id="create correlation v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.488866432Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.019505ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.500315008Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.502095452Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.779854ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.515364012Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.516661495Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.299723ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.52288913Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.523995487Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.105897ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.533881803Z level=info msg="Executing migration" id="copy correlation v1 to v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.534237683Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=355.89µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.540262713Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.541051943Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=789.24µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.547029542Z level=info msg="Executing migration" id="add provisioning column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.561558463Z level=info msg="Migration successfully executed" id="add provisioning column" duration=14.522411ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.569241655Z level=info msg="Executing migration" id="add type column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.577952222Z level=info msg="Migration successfully executed" id="add type column" duration=8.710597ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.615265782Z level=info msg="Executing migration" id="create entity_events table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.616607405Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.341663ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.622544933Z level=info msg="Executing migration" id="create dashboard public config v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.624508811Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.965358ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.630698966Z level=info msg="Executing migration" 
id="drop index UQE_dashboard_public_config_uid - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.631165538Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.635101726Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.635540216Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.641401962Z level=info msg="Executing migration" id="Drop old dashboard public config table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.642667504Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.265122ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.65052236Z level=info msg="Executing migration" id="recreate dashboard public config v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.651778211Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.254231ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.659451972Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.660526719Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.076647ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.676468066Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.677891521Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.424335ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.695197243Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.69629236Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.095807ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.702654649Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.703668514Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.013616ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.711042338Z level=info msg="Executing migration" id="Drop public config table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.711817667Z level=info msg="Migration successfully executed" id="Drop public config table" duration=775.29µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.719722004Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.721664153Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.941549ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.727890047Z level=info msg="Executing migration" id="create index 
UQE_dashboard_public_config_uid - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.729762184Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.871496ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.735290151Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.736362198Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.071997ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.743897576Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.744950872Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.053046ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.751712931Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.776057847Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.334915ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.780830866Z level=info msg="Executing migration" id="add annotations_enabled column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.78703298Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.201684ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.790687222Z level=info msg="Executing migration" id="add time_selection_enabled column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.799236025Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.548053ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.813149581Z level=info msg="Executing migration" id="delete orphaned public dashboards" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.813546851Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=391.34µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.81912644Z level=info msg="Executing migration" id="add share column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.830661947Z level=info msg="Migration successfully executed" id="add share column" duration=11.534887ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.837816756Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.838022401Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=205.985µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.843293962Z level=info msg="Executing migration" id="create file table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.845360404Z level=info msg="Migration successfully executed" id="create file table" duration=2.065372ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.855888916Z level=info msg="Executing migration" id="file table idx: path natural pk" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.856942842Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" 
duration=1.053506ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.863619129Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.864742577Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.125888ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.874334526Z level=info msg="Executing migration" id="create file_meta table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.875303579Z level=info msg="Migration successfully executed" id="create file_meta table" duration=969.243µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.882898009Z level=info msg="Executing migration" id="file table idx: path key" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.884658003Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.758604ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.891273318Z level=info msg="Executing migration" id="set path collation in file table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.891305069Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=28.961µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.89698984Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.897007541Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=18.541µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.95999948Z level=info msg="Executing migration" id="managed permissions migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.960490872Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=492.002µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.971146858Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.971450215Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=303.067µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.976528062Z level=info msg="Executing migration" id="RBAC action name migrator" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.978603463Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.074621ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.986091359Z level=info msg="Executing migration" id="Add UID column to playlist" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:40.998723934Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=12.632635ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.003380351Z level=info msg="Executing migration" id="Update uid column values in playlist" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.004046407Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=665.556µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.011127661Z level=info msg="Executing migration" id="Add index for uid in playlist" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.012363922Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" 
duration=1.236061ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.020715628Z level=info msg="Executing migration" id="update group index for alert rules" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.02117191Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=458.682µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.02686639Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.02725818Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=391.46µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.036712693Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.037480582Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=767.669µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.047947081Z level=info msg="Executing migration" id="add action column to seed_assignment" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.060253193Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=12.311133ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.064977031Z level=info msg="Executing migration" id="add scope column to seed_assignment" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.072307431Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.32833ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.076695069Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.077871269Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.175869ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.08559476Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.166259059Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=80.651789ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.173057817Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.174533704Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.477357ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.187441542Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.189173174Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.731172ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.194814455Z level=info msg="Executing migration" id="add primary key to seed_assigment" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.223447901Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.633466ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.260159076Z level=info msg="Executing migration" id="add origin column to 
seed_assignment" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.272384748Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=12.226822ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.279599056Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.279855262Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=221.935µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.292698289Z level=info msg="Executing migration" id="prevent seeding OnCall access" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.292954996Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=256.537µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.298961544Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.299295482Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=333.598µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.309956545Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.310347115Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=390.51µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.322071504Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.322457304Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=385.59µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.331483336Z level=info msg="Executing migration" id="create folder table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.332897811Z level=info msg="Migration successfully executed" id="create folder table" duration=1.414195ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.342004686Z level=info msg="Executing migration" id="Add index for parent_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.343159014Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.154248ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.359928768Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.361704762Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.771454ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.37499164Z level=info msg="Executing migration" id="Update folder title length" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.37501409Z level=info msg="Migration successfully executed" id="Update folder title length" duration=23.19µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.379381658Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.380187118Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=803.06µs 18:36:07 grafana | 
logger=migrator t=2025-06-15T18:32:41.394734887Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.395705281Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=970.584µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.401985405Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.402798426Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=812.761µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.409891131Z level=info msg="Executing migration" id="Sync dashboard and folder table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.410217689Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=326.618µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.42364422Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.424209294Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=565.184µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.440612829Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.441917061Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.304912ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.454536733Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.456245185Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.707611ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.470694212Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.47188843Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.195158ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.494765066Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.496443907Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.681202ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.505739316Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.50707872Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.341674ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.512210346Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.513954888Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.743183ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.525648748Z level=info msg="Executing migration" id="create 
anon_device table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.527310828Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.661771ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.538569936Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.540779621Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.208985ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.604058232Z level=info msg="Executing migration" id="add index anon_device.updated_at" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.605257922Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.19941ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.614817897Z level=info msg="Executing migration" id="create signing_key table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.61653479Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.717773ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.622102518Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.623689667Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.587099ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.630867644Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.632598056Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.731683ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.641183688Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.641575688Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=393.55µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.647700709Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.660334851Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=12.603021ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.663624771Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.664302549Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=680.698µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.668588754Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.668607965Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=15.461µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.672185693Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.673096655Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=910.672µs 
18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.678718474Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.678731424Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=13.61µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.682295463Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.683184774Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=889.051µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.687205343Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.688022204Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=816.081µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.69317182Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.69394717Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=774.98µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.711234206Z level=info msg="Executing migration" id="create sso_setting table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.712251692Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.024896ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.715677827Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.71625073Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=573.423µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.720896205Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.721214312Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=319.027µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.725719124Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.72637014Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=651.626µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.733062475Z level=info msg="Executing migration" id="create cloud_migration table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.734244085Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.18356ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.740811886Z level=info msg="Executing migration" id="create cloud_migration_run table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.741961924Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.154418ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.749475911Z level=info 
msg="Executing migration" id="add stack_id column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.759145669Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.647449ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.769406402Z level=info msg="Executing migration" id="add region_slug column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.779394168Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.988136ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.786509064Z level=info msg="Executing migration" id="add cluster_slug column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.795070945Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=8.561391ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.799246348Z level=info msg="Executing migration" id="add migration uid column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.809459891Z level=info msg="Migration successfully executed" id="add migration uid column" duration=10.213703ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.813652024Z level=info msg="Executing migration" id="Update uid column values for migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.813776377Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=124.343µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.817919319Z level=info msg="Executing migration" id="Add unique index migration_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.820196365Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.276206ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.824378658Z level=info msg="Executing migration" id="add migration run uid column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.834762275Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=10.383987ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.840693831Z level=info msg="Executing migration" id="Update uid column values for migration run" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.840942827Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=250.806µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.846078384Z level=info msg="Executing migration" id="Add unique index migration_run_uid" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.847098719Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.020005ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.850639506Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.881841206Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=31.19334ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.908956656Z level=info msg="Executing migration" id="create cloud_migration_session v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.91033816Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.417825ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.916997453Z level=info msg="Executing migration" 
id="create index UQE_cloud_migration_session_uid - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.919125656Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=2.127723ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.923531675Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.923944955Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=412.43µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.928363144Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.929266116Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=899.602µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.934183328Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.963339427Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=29.155599ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.96710055Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.967751687Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=650.936µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.970790861Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.971612681Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=821.63µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.976232996Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.976656526Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=425.29µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.980568833Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.981391162Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=820.009µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.988079358Z level=info msg="Executing migration" id="add snapshot upload_url column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:41.999676963Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=11.598336ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.005080437Z level=info msg="Executing migration" id="add snapshot status column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.014556931Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=9.476194ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.019514274Z level=info msg="Executing migration" id="add snapshot local_directory column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.029045988Z level=info msg="Migration successfully executed" id="add snapshot 
local_directory column" duration=9.511474ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.035268752Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.042249585Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=6.979692ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.046405427Z level=info msg="Executing migration" id="add snapshot encryption_key column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.055795868Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.389721ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.062217337Z level=info msg="Executing migration" id="add snapshot error_string column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.071730732Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=9.512415ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.077244228Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.078195741Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=950.733µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.081940194Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.119865049Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=37.925016ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.124419581Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.133365773Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=8.941932ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.138081599Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.149231804Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=11.147005ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.154270519Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.162647015Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=8.375916ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.169128025Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.177924792Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=8.791217ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.182977806Z level=info msg="Executing migration" id="increase resource_uid column length" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.182994267Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=17.251µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.187237782Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 18:36:07 grafana | logger=migrator 
t=2025-06-15T18:32:42.187250792Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=15.271µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.224759438Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.232022837Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=7.263209ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.236413665Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.243289775Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.875449ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.249220732Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.249545409Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=324.337µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.258427728Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.258593352Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=165.484µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.263654517Z level=info msg="Executing migration" id="add record column to alert_rule table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.270457965Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=6.803138ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.274244418Z level=info msg="Executing migration" id="add record column to alert_rule_version table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.281084477Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=6.839719ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.285010624Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.291955716Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=6.943002ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.296278942Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.303397528Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=7.117806ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.309042067Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.309476958Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=434.541µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.315133138Z level=info msg="Executing migration" id="add metadata column to 
alert_rule table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.324921089Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.787021ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.330443026Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.339259603Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=8.815577ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.343521468Z level=info msg="Executing migration" id="delete orphaned service account permissions" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.343937968Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=415.37µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.350386608Z level=info msg="Executing migration" id="adding action set permissions" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.351159466Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=772.708µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.356839116Z level=info msg="Executing migration" id="create user_external_session table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.358142248Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.302962ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.364839974Z level=info msg="Executing migration" id="increase name_id column length to 1024" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.364858894Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=19.71µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.369463308Z level=info msg="Executing migration" id="increase session_id column length to 1024" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.369482458Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=20.39µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.374360809Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.374720298Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=359.279µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.382729896Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.392327763Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=9.596916ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.398392582Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.405559049Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=7.165097ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.40881409Z level=info msg="Executing migration" id="add alert_rule_state table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.409805953Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=991.313µs 18:36:07 grafana | 
logger=migrator t=2025-06-15T18:32:42.416383446Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.417682728Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.300522ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.421888982Z level=info msg="Executing migration" id="add guid column to alert_rule table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.432045802Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=10.15611ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.441292801Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.450742884Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=9.449393ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.456062295Z level=info msg="Executing migration" id="cleanup alert_rule_version table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.456082745Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.45628872Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.456304821Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=242.635µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.460004812Z level=info msg="Executing migration" id="populate rule guid in alert rule table" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.460561806Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=556.214µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.46395208Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.465101188Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.148868ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.470744507Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.472129172Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.384395ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.478203831Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.479591365Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.386734ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.486341202Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.487492291Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid 
columns" duration=1.150659ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.491897809Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.501749162Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=9.828533ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.52762398Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.537779472Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=10.155822ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.540876888Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.547805499Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=6.928011ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.551298625Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.561559708Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=10.259023ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.56607737Z level=info msg="Executing migration" id="remove the datasources:drilldown action" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.566318585Z level=info msg="Removed 0 datasources:drilldown permissions" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.566337626Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=260.546µs 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.571668458Z level=info msg="Executing migration" id="remove title in folder unique index" 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.57298198Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.313272ms 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.576105157Z level=info msg="migrations completed" performed=654 skipped=0 duration=7.258906872s 18:36:07 grafana | logger=migrator t=2025-06-15T18:32:42.576767524Z level=info msg="Unlocking database" 18:36:07 grafana | logger=sqlstore t=2025-06-15T18:32:42.592022549Z level=info msg="Created default admin" user=admin 18:36:07 grafana | logger=sqlstore t=2025-06-15T18:32:42.592244326Z level=info msg="Created default organization" 18:36:07 grafana | logger=secrets t=2025-06-15T18:32:42.597929846Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 18:36:07 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-15T18:32:42.678127424Z level=info msg="Restored cache from database" duration=429.59µs 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.687598708Z level=info msg="Locking database" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.687613509Z level=info msg="Starting DB migrations" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.695291588Z level=info msg="Executing migration" id="create resource_migration_log table" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.696216491Z level=info msg="Migration successfully 
executed" id="create resource_migration_log table" duration=924.173µs 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.701378349Z level=info msg="Executing migration" id="Initialize resource tables" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.701418Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=43.101µs 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.705174822Z level=info msg="Executing migration" id="drop table resource" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.705259054Z level=info msg="Migration successfully executed" id="drop table resource" duration=85.983µs 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.707897269Z level=info msg="Executing migration" id="create table resource" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.709037057Z level=info msg="Migration successfully executed" id="create table resource" duration=1.139018ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.712904273Z level=info msg="Executing migration" id="create table resource, index: 0" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.714137273Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.23246ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.719527025Z level=info msg="Executing migration" id="drop table resource_history" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.719796533Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=268.728µs 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.724247393Z level=info msg="Executing migration" id="create table resource_history" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.726483267Z level=info msg="Migration successfully executed" id="create table resource_history" duration=2.255815ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.730630259Z level=info msg="Executing migration" id="create table resource_history, index: 0" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.731919591Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.288972ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.737876929Z level=info msg="Executing migration" id="create table resource_history, index: 1" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.739021127Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.143677ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.743328043Z level=info msg="Executing migration" id="drop table resource_version" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.743404046Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=76.462µs 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.750267494Z level=info msg="Executing migration" id="create table resource_version" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.752259344Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.99107ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.75737827Z level=info msg="Executing migration" id="create table resource_version, index: 0" 18:36:07 grafana | logger=resource-migrator 
t=2025-06-15T18:32:42.75863071Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.25199ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.763321677Z level=info msg="Executing migration" id="drop table resource_blob" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.763497891Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=175.624µs 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.771032927Z level=info msg="Executing migration" id="create table resource_blob" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.773215991Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=2.177245ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.778823889Z level=info msg="Executing migration" id="create table resource_blob, index: 0" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.781530696Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.707257ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.78656481Z level=info msg="Executing migration" id="create table resource_blob, index: 1" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.787814151Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.248541ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.823589024Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.837061116Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=13.474162ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.843882114Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.855565113Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=11.680659ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.862195696Z level=info msg="Executing migration" id="Add index to resource_history for polling" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.863798616Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.6027ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.873390562Z level=info msg="Executing migration" id="Add index to resource for loading" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.874873679Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.482427ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.882367944Z level=info msg="Executing migration" id="Add column folder in resource_history" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.896513863Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=14.144409ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.905517995Z level=info msg="Executing migration" id="Add column folder in resource" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.921566961Z level=info msg="Migration successfully executed" id="Add 
column folder in resource" duration=16.048336ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.926725548Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" 18:36:07 grafana | logger=deletion-marker-migrator t=2025-06-15T18:32:42.926902273Z level=info msg="finding any deletion markers" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.927342744Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=616.976µs 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.933351212Z level=info msg="Executing migration" id="Add index to resource_history for get trash" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.934622583Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.270151ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.938485359Z level=info msg="Executing migration" id="Add generation to resource history" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.954324419Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=15.83302ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.958016861Z level=info msg="Executing migration" id="Add generation index to resource history" 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.959580799Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.561888ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.963456225Z level=info msg="migrations completed" performed=26 skipped=0 duration=268.202538ms 18:36:07 grafana | logger=resource-migrator t=2025-06-15T18:32:42.964016308Z level=info msg="Unlocking database" 18:36:07 grafana | t=2025-06-15T18:32:42.964233314Z level=info caller=logger.go:214 time=2025-06-15T18:32:42.964218583Z msg="Using channel notifier" logger=sql-resource-server 18:36:07 grafana | logger=plugin.store t=2025-06-15T18:32:42.972545969Z level=info msg="Loading plugins..." 
18:36:07 grafana | logger=plugins.registration t=2025-06-15T18:32:43.009619264Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 18:36:07 grafana | logger=plugins.initialization t=2025-06-15T18:32:43.009642005Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 18:36:07 grafana | logger=plugin.store t=2025-06-15T18:32:43.009696476Z level=info msg="Plugins loaded" count=53 duration=37.151297ms 18:36:07 grafana | logger=query_data t=2025-06-15T18:32:43.021131868Z level=info msg="Query Service initialization" 18:36:07 grafana | logger=live.push_http t=2025-06-15T18:32:43.026939491Z level=info msg="Live Push Gateway initialization" 18:36:07 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-15T18:32:43.042185337Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 18:36:07 grafana | logger=ngalert t=2025-06-15T18:32:43.050702278Z level=info msg="Using simple database alert instance store" 18:36:07 grafana | logger=ngalert.state.manager.persist t=2025-06-15T18:32:43.050727708Z level=info msg="Using sync state persister" 18:36:07 grafana | logger=infra.usagestats.collector t=2025-06-15T18:32:43.053092647Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 18:36:07 grafana | logger=grafanaStorageLogger t=2025-06-15T18:32:43.053403154Z level=info msg="Storage starting" 18:36:07 grafana | logger=ngalert.state.manager t=2025-06-15T18:32:43.055544187Z level=info msg="Warming state cache for startup" 18:36:07 grafana | logger=plugin.backgroundinstaller t=2025-06-15T18:32:43.05729442Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 18:36:07 grafana | logger=provisioning.datasources t=2025-06-15T18:32:43.064771035Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 18:36:07 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-15T18:32:43.067140423Z level=info msg="Starting MultiOrg Alertmanager" 18:36:07 grafana | logger=http.server t=2025-06-15T18:32:43.072226638Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 18:36:07 grafana | logger=sqlstore.transactions t=2025-06-15T18:32:43.082465632Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 18:36:07 grafana | logger=ngalert.state.manager t=2025-06-15T18:32:43.15454863Z level=info msg="State cache has been initialized" states=0 duration=99.003343ms 18:36:07 grafana | logger=ngalert.scheduler t=2025-06-15T18:32:43.154601311Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 18:36:07 grafana | logger=ticker t=2025-06-15T18:32:43.154674863Z level=info msg=starting first_tick=2025-06-15T18:32:50Z 18:36:07 grafana | logger=plugins.update.checker t=2025-06-15T18:32:43.164890056Z level=info msg="Update check succeeded" duration=98.368738ms 18:36:07 grafana | logger=grafana.update.checker t=2025-06-15T18:32:43.167176811Z level=info msg="Update check succeeded" duration=100.002627ms 18:36:07 grafana | logger=sqlstore.transactions t=2025-06-15T18:32:43.175104147Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 18:36:07 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-15T18:32:43.238197584Z level=info msg="Patterns update finished" duration=183.920978ms 18:36:07 grafana | logger=provisioning.alerting t=2025-06-15T18:32:43.28139046Z level=info 
msg="starting to provision alerting" 18:36:07 grafana | logger=provisioning.alerting t=2025-06-15T18:32:43.28140983Z level=info msg="finished to provision alerting" 18:36:07 grafana | logger=provisioning.dashboard t=2025-06-15T18:32:43.282666042Z level=info msg="starting to provision dashboards" 18:36:07 grafana | logger=grafana-apiserver t=2025-06-15T18:32:43.424749527Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 18:36:07 grafana | logger=grafana-apiserver t=2025-06-15T18:32:43.426803498Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 18:36:07 grafana | logger=grafana-apiserver t=2025-06-15T18:32:43.428745476Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" 18:36:07 grafana | logger=grafana-apiserver t=2025-06-15T18:32:43.429517045Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 18:36:07 grafana | logger=grafana-apiserver t=2025-06-15T18:32:43.430208422Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 18:36:07 grafana | logger=grafana-apiserver t=2025-06-15T18:32:43.433043552Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 18:36:07 grafana | logger=grafana-apiserver t=2025-06-15T18:32:43.433604326Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 18:36:07 grafana | logger=grafana-apiserver t=2025-06-15T18:32:43.434031356Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 18:36:07 grafana | logger=grafana-apiserver t=2025-06-15T18:32:43.434551549Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 18:36:07 grafana | logger=app-registry t=2025-06-15T18:32:43.491698939Z level=info msg="app registry initialized" 18:36:07 grafana | logger=plugin.installer t=2025-06-15T18:32:43.513139458Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 18:36:07 grafana | logger=installer.fs t=2025-06-15T18:32:43.580645314Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" 18:36:07 grafana | logger=plugins.registration t=2025-06-15T18:32:43.607485266Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app 18:36:07 grafana | logger=plugin.backgroundinstaller t=2025-06-15T18:32:43.607506587Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=550.193166ms 18:36:07 grafana | logger=plugin.backgroundinstaller t=2025-06-15T18:32:43.607526377Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 18:36:07 grafana | logger=plugin.installer t=2025-06-15T18:32:43.900448095Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 18:36:07 grafana | logger=provisioning.dashboard t=2025-06-15T18:32:44.028303536Z level=info msg="finished to provision dashboards" 18:36:07 grafana | logger=installer.fs t=2025-06-15T18:32:44.034002368Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 18:36:07 grafana | logger=plugins.registration t=2025-06-15T18:32:44.057578156Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 18:36:07 grafana | logger=plugin.backgroundinstaller t=2025-06-15T18:32:44.057599596Z level=info msg="Plugin 
successfully installed" pluginId=grafana-lokiexplore-app version= duration=450.068589ms 18:36:07 grafana | logger=plugin.backgroundinstaller t=2025-06-15T18:32:44.057617977Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 18:36:07 grafana | logger=plugin.installer t=2025-06-15T18:32:44.23634031Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 18:36:07 grafana | logger=installer.fs t=2025-06-15T18:32:44.290619372Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" 18:36:07 grafana | logger=plugins.registration t=2025-06-15T18:32:44.305913443Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app 18:36:07 grafana | logger=plugin.backgroundinstaller t=2025-06-15T18:32:44.305934564Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=248.312717ms 18:36:07 grafana | logger=plugin.backgroundinstaller t=2025-06-15T18:32:44.305954104Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 18:36:07 grafana | logger=plugin.installer t=2025-06-15T18:32:44.52778556Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 18:36:07 grafana | logger=installer.fs t=2025-06-15T18:32:44.590117833Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" 18:36:07 grafana | logger=plugins.registration t=2025-06-15T18:32:44.60603469Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app 18:36:07 grafana | logger=plugin.backgroundinstaller t=2025-06-15T18:32:44.60605983Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=300.101536ms 18:36:07 grafana | logger=infra.usagestats t=2025-06-15T18:33:57.070954747Z level=info msg="Usage stats are ready to report" 18:36:07 kafka | ===> User 18:36:07 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 18:36:07 kafka | ===> Configuring ... 18:36:07 kafka | Running in Zookeeper mode... 18:36:07 kafka | ===> Running preflight checks ... 18:36:07 kafka | ===> Check if /var/lib/kafka/data is writable ... 18:36:07 kafka | ===> Check if Zookeeper is healthy ... 18:36:07 kafka | [2025-06-15 18:32:39,365] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,365] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,365] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,365] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,365] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,365] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,365] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,365] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,365] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,366] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 
18:32:39,366] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,366] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,366] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,366] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,366] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,366] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,366] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,366] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,368] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,371] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 18:36:07 kafka | [2025-06-15 18:32:39,375] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 18:36:07 kafka | [2025-06-15 18:32:39,382] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 18:36:07 kafka | [2025-06-15 18:32:39,404] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 18:36:07 kafka | [2025-06-15 18:32:39,404] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 18:36:07 kafka | [2025-06-15 18:32:39,411] INFO Socket connection established, initiating session, client: /172.17.0.6:35102, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 18:36:07 kafka | [2025-06-15 18:32:39,430] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000263930000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 18:36:07 kafka | [2025-06-15 18:32:39,549] INFO Session: 0x100000263930000 closed (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:39,549] INFO EventThread shut down for session: 0x100000263930000 (org.apache.zookeeper.ClientCnxn) 18:36:07 kafka | Using log4j config /etc/kafka/log4j.properties 18:36:07 kafka | ===> Launching ... 18:36:07 kafka | ===> Launching kafka ... 18:36:07 kafka | [2025-06-15 18:32:40,189] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 18:36:07 kafka | [2025-06-15 18:32:40,447] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 18:36:07 kafka | [2025-06-15 18:32:40,521] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 18:36:07 kafka | [2025-06-15 18:32:40,522] INFO starting (kafka.server.KafkaServer) 18:36:07 kafka | [2025-06-15 18:32:40,523] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 18:36:07 kafka | [2025-06-15 18:32:40,534] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 18:36:07 kafka | [2025-06-15 18:32:40,538] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,538] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,538] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,538] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,538] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/c
onnect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,539] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,541] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) 18:36:07 kafka | [2025-06-15 18:32:40,544] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 18:36:07 kafka | [2025-06-15 18:32:40,549] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 18:36:07 kafka | [2025-06-15 18:32:40,551] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 18:36:07 kafka | [2025-06-15 18:32:40,554] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 18:36:07 kafka | [2025-06-15 18:32:40,559] INFO Socket connection established, initiating session, client: /172.17.0.6:35104, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 18:36:07 kafka | [2025-06-15 18:32:40,568] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000263930001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 18:36:07 kafka | [2025-06-15 18:32:40,573] INFO [ZooKeeperClient Kafka server] Connected. 
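(kafka.zookeeper.ZooKeeperClient)

At this point the broker holds a live ZooKeeper session (session id 0x100000263930001, negotiated timeout 18000 ms); the broker registration, controller election and topic metadata operations that follow all go through this session. The earlier "===> Check if Zookeeper is healthy ..." preflight step performs essentially the same handshake before Kafka is launched. A minimal sketch of that kind of liveness probe, assuming the org.apache.zookeeper client library is on the classpath — the connect string and session timeout mirror the log above, but the class itself is illustrative and not part of the CSIT scripts:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkHealthProbe {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Connect string and session timeout as logged by the preflight check above.
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 40000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        try {
            // Healthy if the SyncConnected event arrives before the deadline.
            boolean ok = connected.await(10, TimeUnit.SECONDS);
            System.out.println(ok ? "zookeeper is healthy" : "no connection within 10s");
        } finally {
            zk.close();
        }
    }
}
```
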
18:36:07 kafka | [2025-06-15 18:32:40,880] INFO Cluster ID = y4Rm7-C7SZiMksPpYccgvw (kafka.server.KafkaServer) 18:36:07 kafka | [2025-06-15 18:32:40,883] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 18:36:07 kafka | [2025-06-15 18:32:40,937] INFO KafkaConfig values: 18:36:07 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 18:36:07 kafka | alter.config.policy.class.name = null 18:36:07 kafka | alter.log.dirs.replication.quota.window.num = 11 18:36:07 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 18:36:07 kafka | authorizer.class.name = 18:36:07 kafka | auto.create.topics.enable = true 18:36:07 kafka | auto.include.jmx.reporter = true 18:36:07 kafka | auto.leader.rebalance.enable = true 18:36:07 kafka | background.threads = 10 18:36:07 kafka | broker.heartbeat.interval.ms = 2000 18:36:07 kafka | broker.id = 1 18:36:07 kafka | broker.id.generation.enable = true 18:36:07 kafka | broker.rack = null 18:36:07 kafka | broker.session.timeout.ms = 9000 18:36:07 kafka | client.quota.callback.class = null 18:36:07 kafka | compression.type = producer 18:36:07 kafka | connection.failed.authentication.delay.ms = 100 18:36:07 kafka | connections.max.idle.ms = 600000 18:36:07 kafka | connections.max.reauth.ms = 0 18:36:07 kafka | control.plane.listener.name = null 18:36:07 kafka | controlled.shutdown.enable = true 18:36:07 kafka | controlled.shutdown.max.retries = 3 18:36:07 kafka | controlled.shutdown.retry.backoff.ms = 5000 18:36:07 kafka | controller.listener.names = null 18:36:07 kafka | controller.quorum.append.linger.ms = 25 18:36:07 kafka | controller.quorum.election.backoff.max.ms = 1000 18:36:07 kafka | controller.quorum.election.timeout.ms = 1000 18:36:07 kafka | controller.quorum.fetch.timeout.ms = 2000 18:36:07 kafka | controller.quorum.request.timeout.ms = 2000 18:36:07 kafka | controller.quorum.retry.backoff.ms = 20 18:36:07 kafka | controller.quorum.voters = [] 18:36:07 kafka | controller.quota.window.num = 11 18:36:07 kafka | controller.quota.window.size.seconds = 1 18:36:07 kafka | controller.socket.timeout.ms = 30000 18:36:07 kafka | create.topic.policy.class.name = null 18:36:07 kafka | default.replication.factor = 1 18:36:07 kafka | delegation.token.expiry.check.interval.ms = 3600000 18:36:07 kafka | delegation.token.expiry.time.ms = 86400000 18:36:07 kafka | delegation.token.master.key = null 18:36:07 kafka | delegation.token.max.lifetime.ms = 604800000 18:36:07 kafka | delegation.token.secret.key = null 18:36:07 kafka | delete.records.purgatory.purge.interval.requests = 1 18:36:07 kafka | delete.topic.enable = true 18:36:07 kafka | early.start.listeners = null 18:36:07 kafka | fetch.max.bytes = 57671680 18:36:07 kafka | fetch.purgatory.purge.interval.requests = 1000 18:36:07 kafka | group.initial.rebalance.delay.ms = 3000 18:36:07 kafka | group.max.session.timeout.ms = 1800000 18:36:07 kafka | group.max.size = 2147483647 18:36:07 kafka | group.min.session.timeout.ms = 6000 18:36:07 kafka | initial.broker.registration.timeout.ms = 60000 18:36:07 kafka | inter.broker.listener.name = PLAINTEXT 18:36:07 kafka | inter.broker.protocol.version = 3.4-IV0 18:36:07 kafka | kafka.metrics.polling.interval.secs = 10 18:36:07 kafka | kafka.metrics.reporters = [] 18:36:07 kafka | leader.imbalance.check.interval.seconds = 300 18:36:07 kafka | leader.imbalance.per.broker.percentage = 10 18:36:07 kafka | listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 18:36:07 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 18:36:07 kafka | log.cleaner.backoff.ms = 15000 18:36:07 kafka | log.cleaner.dedupe.buffer.size = 134217728 18:36:07 kafka | log.cleaner.delete.retention.ms = 86400000 18:36:07 kafka | log.cleaner.enable = true 18:36:07 kafka | log.cleaner.io.buffer.load.factor = 0.9 18:36:07 kafka | log.cleaner.io.buffer.size = 524288 18:36:07 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 18:36:07 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 18:36:07 kafka | log.cleaner.min.cleanable.ratio = 0.5 18:36:07 kafka | log.cleaner.min.compaction.lag.ms = 0 18:36:07 kafka | log.cleaner.threads = 1 18:36:07 kafka | log.cleanup.policy = [delete] 18:36:07 kafka | log.dir = /tmp/kafka-logs 18:36:07 kafka | log.dirs = /var/lib/kafka/data 18:36:07 kafka | log.flush.interval.messages = 9223372036854775807 18:36:07 kafka | log.flush.interval.ms = null 18:36:07 kafka | log.flush.offset.checkpoint.interval.ms = 60000 18:36:07 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 18:36:07 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 18:36:07 kafka | log.index.interval.bytes = 4096 18:36:07 kafka | log.index.size.max.bytes = 10485760 18:36:07 kafka | log.message.downconversion.enable = true 18:36:07 kafka | log.message.format.version = 3.0-IV1 18:36:07 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 18:36:07 kafka | log.message.timestamp.type = CreateTime 18:36:07 kafka | log.preallocate = false 18:36:07 kafka | log.retention.bytes = -1 18:36:07 kafka | log.retention.check.interval.ms = 300000 18:36:07 kafka | log.retention.hours = 168 18:36:07 kafka | log.retention.minutes = null 18:36:07 kafka | log.retention.ms = null 18:36:07 kafka | log.roll.hours = 168 18:36:07 kafka | log.roll.jitter.hours = 0 18:36:07 kafka | log.roll.jitter.ms = null 18:36:07 kafka | log.roll.ms = null 18:36:07 kafka | log.segment.bytes = 1073741824 18:36:07 kafka | log.segment.delete.delay.ms = 60000 18:36:07 kafka | max.connection.creation.rate = 2147483647 18:36:07 kafka | max.connections = 2147483647 18:36:07 kafka | max.connections.per.ip = 2147483647 18:36:07 kafka | max.connections.per.ip.overrides = 18:36:07 kafka | max.incremental.fetch.session.cache.slots = 1000 18:36:07 kafka | message.max.bytes = 1048588 18:36:07 kafka | metadata.log.dir = null 18:36:07 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 18:36:07 kafka | metadata.log.max.snapshot.interval.ms = 3600000 18:36:07 kafka | metadata.log.segment.bytes = 1073741824 18:36:07 kafka | metadata.log.segment.min.bytes = 8388608 18:36:07 kafka | metadata.log.segment.ms = 604800000 18:36:07 kafka | metadata.max.idle.interval.ms = 500 18:36:07 kafka | metadata.max.retention.bytes = 104857600 18:36:07 kafka | metadata.max.retention.ms = 604800000 18:36:07 kafka | metric.reporters = [] 18:36:07 kafka | metrics.num.samples = 2 18:36:07 kafka | metrics.recording.level = INFO 18:36:07 kafka | metrics.sample.window.ms = 30000 18:36:07 kafka | min.insync.replicas = 1 18:36:07 kafka | node.id = 1 18:36:07 kafka | num.io.threads = 8 18:36:07 kafka | num.network.threads = 3 18:36:07 kafka | num.partitions = 1 18:36:07 kafka | num.recovery.threads.per.data.dir = 1 18:36:07 kafka | num.replica.alter.log.dirs.threads = null 18:36:07 kafka | num.replica.fetchers = 1 18:36:07 kafka | offset.metadata.max.bytes = 4096 18:36:07 kafka | offsets.commit.required.acks = -1 
18:36:07 kafka | offsets.commit.timeout.ms = 5000 18:36:07 kafka | offsets.load.buffer.size = 5242880 18:36:07 kafka | offsets.retention.check.interval.ms = 600000 18:36:07 kafka | offsets.retention.minutes = 10080 18:36:07 kafka | offsets.topic.compression.codec = 0 18:36:07 kafka | offsets.topic.num.partitions = 50 18:36:07 kafka | offsets.topic.replication.factor = 1 18:36:07 kafka | offsets.topic.segment.bytes = 104857600 18:36:07 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 18:36:07 kafka | password.encoder.iterations = 4096 18:36:07 kafka | password.encoder.key.length = 128 18:36:07 kafka | password.encoder.keyfactory.algorithm = null 18:36:07 kafka | password.encoder.old.secret = null 18:36:07 kafka | password.encoder.secret = null 18:36:07 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 18:36:07 kafka | process.roles = [] 18:36:07 kafka | producer.id.expiration.check.interval.ms = 600000 18:36:07 kafka | producer.id.expiration.ms = 86400000 18:36:07 kafka | producer.purgatory.purge.interval.requests = 1000 18:36:07 kafka | queued.max.request.bytes = -1 18:36:07 kafka | queued.max.requests = 500 18:36:07 kafka | quota.window.num = 11 18:36:07 kafka | quota.window.size.seconds = 1 18:36:07 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 18:36:07 kafka | remote.log.manager.task.interval.ms = 30000 18:36:07 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 18:36:07 kafka | remote.log.manager.task.retry.backoff.ms = 500 18:36:07 kafka | remote.log.manager.task.retry.jitter = 0.2 18:36:07 kafka | remote.log.manager.thread.pool.size = 10 18:36:07 kafka | remote.log.metadata.manager.class.name = null 18:36:07 kafka | remote.log.metadata.manager.class.path = null 18:36:07 kafka | remote.log.metadata.manager.impl.prefix = null 18:36:07 kafka | remote.log.metadata.manager.listener.name = null 18:36:07 kafka | remote.log.reader.max.pending.tasks = 100 18:36:07 kafka | remote.log.reader.threads = 10 18:36:07 kafka | remote.log.storage.manager.class.name = null 18:36:07 kafka | remote.log.storage.manager.class.path = null 18:36:07 kafka | remote.log.storage.manager.impl.prefix = null 18:36:07 kafka | remote.log.storage.system.enable = false 18:36:07 kafka | replica.fetch.backoff.ms = 1000 18:36:07 kafka | replica.fetch.max.bytes = 1048576 18:36:07 kafka | replica.fetch.min.bytes = 1 18:36:07 kafka | replica.fetch.response.max.bytes = 10485760 18:36:07 kafka | replica.fetch.wait.max.ms = 500 18:36:07 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 18:36:07 kafka | replica.lag.time.max.ms = 30000 18:36:07 kafka | replica.selector.class = null 18:36:07 kafka | replica.socket.receive.buffer.bytes = 65536 18:36:07 kafka | replica.socket.timeout.ms = 30000 18:36:07 kafka | replication.quota.window.num = 11 18:36:07 kafka | replication.quota.window.size.seconds = 1 18:36:07 kafka | request.timeout.ms = 30000 18:36:07 kafka | reserved.broker.max.id = 1000 18:36:07 kafka | sasl.client.callback.handler.class = null 18:36:07 kafka | sasl.enabled.mechanisms = [GSSAPI] 18:36:07 kafka | sasl.jaas.config = null 18:36:07 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:07 kafka | sasl.kerberos.min.time.before.relogin = 60000 18:36:07 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 18:36:07 kafka | sasl.kerberos.service.name = null 18:36:07 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:07 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:07 kafka | 
sasl.login.callback.handler.class = null 18:36:07 kafka | sasl.login.class = null 18:36:07 kafka | sasl.login.connect.timeout.ms = null 18:36:07 kafka | sasl.login.read.timeout.ms = null 18:36:07 kafka | sasl.login.refresh.buffer.seconds = 300 18:36:07 kafka | sasl.login.refresh.min.period.seconds = 60 18:36:07 kafka | sasl.login.refresh.window.factor = 0.8 18:36:07 kafka | sasl.login.refresh.window.jitter = 0.05 18:36:07 kafka | sasl.login.retry.backoff.max.ms = 10000 18:36:07 kafka | sasl.login.retry.backoff.ms = 100 18:36:07 kafka | sasl.mechanism.controller.protocol = GSSAPI 18:36:07 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 18:36:07 kafka | sasl.oauthbearer.clock.skew.seconds = 30 18:36:07 kafka | sasl.oauthbearer.expected.audience = null 18:36:07 kafka | sasl.oauthbearer.expected.issuer = null 18:36:07 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:07 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:07 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:07 kafka | sasl.oauthbearer.jwks.endpoint.url = null 18:36:07 kafka | sasl.oauthbearer.scope.claim.name = scope 18:36:07 kafka | sasl.oauthbearer.sub.claim.name = sub 18:36:07 kafka | sasl.oauthbearer.token.endpoint.url = null 18:36:07 kafka | sasl.server.callback.handler.class = null 18:36:07 kafka | sasl.server.max.receive.size = 524288 18:36:07 kafka | security.inter.broker.protocol = PLAINTEXT 18:36:07 kafka | security.providers = null 18:36:07 kafka | socket.connection.setup.timeout.max.ms = 30000 18:36:07 kafka | socket.connection.setup.timeout.ms = 10000 18:36:07 kafka | socket.listen.backlog.size = 50 18:36:07 kafka | socket.receive.buffer.bytes = 102400 18:36:07 kafka | socket.request.max.bytes = 104857600 18:36:07 kafka | socket.send.buffer.bytes = 102400 18:36:07 kafka | ssl.cipher.suites = [] 18:36:07 kafka | ssl.client.auth = none 18:36:07 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:07 kafka | ssl.endpoint.identification.algorithm = https 18:36:07 kafka | ssl.engine.factory.class = null 18:36:07 kafka | ssl.key.password = null 18:36:07 kafka | ssl.keymanager.algorithm = SunX509 18:36:07 kafka | ssl.keystore.certificate.chain = null 18:36:07 kafka | ssl.keystore.key = null 18:36:07 kafka | ssl.keystore.location = null 18:36:07 kafka | ssl.keystore.password = null 18:36:07 kafka | ssl.keystore.type = JKS 18:36:07 kafka | ssl.principal.mapping.rules = DEFAULT 18:36:07 kafka | ssl.protocol = TLSv1.3 18:36:07 kafka | ssl.provider = null 18:36:07 kafka | ssl.secure.random.implementation = null 18:36:07 kafka | ssl.trustmanager.algorithm = PKIX 18:36:07 kafka | ssl.truststore.certificates = null 18:36:07 kafka | ssl.truststore.location = null 18:36:07 kafka | ssl.truststore.password = null 18:36:07 kafka | ssl.truststore.type = JKS 18:36:07 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 18:36:07 kafka | transaction.max.timeout.ms = 900000 18:36:07 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 18:36:07 kafka | transaction.state.log.load.buffer.size = 5242880 18:36:07 kafka | transaction.state.log.min.isr = 2 18:36:07 kafka | transaction.state.log.num.partitions = 50 18:36:07 kafka | transaction.state.log.replication.factor = 3 18:36:07 kafka | transaction.state.log.segment.bytes = 104857600 18:36:07 kafka | transactional.id.expiration.ms = 604800000 18:36:07 kafka | unclean.leader.election.enable = false 18:36:07 kafka | zookeeper.clientCnxnSocket = null 18:36:07 kafka | 
zookeeper.connect = zookeeper:2181 18:36:07 kafka | zookeeper.connection.timeout.ms = null 18:36:07 kafka | zookeeper.max.in.flight.requests = 10 18:36:07 kafka | zookeeper.metadata.migration.enable = false 18:36:07 kafka | zookeeper.session.timeout.ms = 18000 18:36:07 kafka | zookeeper.set.acl = false 18:36:07 kafka | zookeeper.ssl.cipher.suites = null 18:36:07 kafka | zookeeper.ssl.client.enable = false 18:36:07 kafka | zookeeper.ssl.crl.enable = false 18:36:07 kafka | zookeeper.ssl.enabled.protocols = null 18:36:07 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 18:36:07 kafka | zookeeper.ssl.keystore.location = null 18:36:07 kafka | zookeeper.ssl.keystore.password = null 18:36:07 kafka | zookeeper.ssl.keystore.type = null 18:36:07 kafka | zookeeper.ssl.ocsp.enable = false 18:36:07 kafka | zookeeper.ssl.protocol = TLSv1.2 18:36:07 kafka | zookeeper.ssl.truststore.location = null 18:36:07 kafka | zookeeper.ssl.truststore.password = null 18:36:07 kafka | zookeeper.ssl.truststore.type = null 18:36:07 kafka | (kafka.server.KafkaConfig) 18:36:07 kafka | [2025-06-15 18:32:40,967] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 18:36:07 kafka | [2025-06-15 18:32:40,968] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 18:36:07 kafka | [2025-06-15 18:32:40,976] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 18:36:07 kafka | [2025-06-15 18:32:40,979] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 18:36:07 kafka | [2025-06-15 18:32:41,010] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:32:41,013] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:32:41,025] INFO Loaded 0 logs in 15ms. (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:32:41,026] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:32:41,028] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:32:41,038] INFO Starting the log cleaner (kafka.log.LogCleaner) 18:36:07 kafka | [2025-06-15 18:32:41,080] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) 18:36:07 kafka | [2025-06-15 18:32:41,099] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 18:36:07 kafka | [2025-06-15 18:32:41,114] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 18:36:07 kafka | [2025-06-15 18:32:41,169] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) 18:36:07 kafka | [2025-06-15 18:32:41,507] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 18:36:07 kafka | [2025-06-15 18:32:41,511] INFO Awaiting socket connections on 0.0.0.0:9092. 
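(kafka.network.DataPlaneAcceptor)

The two acceptors being brought up here correspond to the listener pair in the config dump above: PLAINTEXT://0.0.0.0:9092 is advertised to other containers as kafka:9092 and also carries inter-broker traffic (inter.broker.listener.name = PLAINTEXT), while PLAINTEXT_HOST://0.0.0.0:29092 is advertised as localhost:29092 so that processes on the Docker host can reach the same broker. A client therefore has to bootstrap against whichever advertised address is routable from where it runs. A minimal connectivity sketch, assuming the kafka-clients library and a probe running on the host (the class name is illustrative):

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ListenerProbe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // From the host: use the PLAINTEXT_HOST listener advertised above.
        // From another container on the compose network: use kafka:9092 instead.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        try (AdminClient admin = AdminClient.create(props)) {
            admin.describeCluster().nodes().get().forEach(node ->
                System.out.printf("broker %d advertised at %s:%d%n",
                    node.id(), node.host(), node.port()));
        }
    }
}
```
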
18:36:07 kafka | [2025-06-15 18:32:41,536] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 18:36:07 kafka | [2025-06-15 18:32:41,537] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 18:36:07 kafka | [2025-06-15 18:32:41,537] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 18:36:07 kafka | [2025-06-15 18:32:41,543] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 18:36:07 kafka | [2025-06-15 18:32:41,548] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) 18:36:07 kafka | [2025-06-15 18:32:41,567] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 18:36:07 kafka | [2025-06-15 18:32:41,569] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 18:36:07 kafka | [2025-06-15 18:32:41,570] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 18:36:07 kafka | [2025-06-15 18:32:41,572] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 18:36:07 kafka | [2025-06-15 18:32:41,592] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 18:36:07 kafka | [2025-06-15 18:32:41,616] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 18:36:07 kafka | [2025-06-15 18:32:41,638] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750012361627,1750012361627,1,0,0,72057604298440705,258,0,27 18:36:07 kafka | (kafka.zk.KafkaZkClient) 18:36:07 kafka | [2025-06-15 18:32:41,640] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 18:36:07 kafka | [2025-06-15 18:32:41,706] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 18:36:07 kafka | [2025-06-15 18:32:41,721] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 18:36:07 kafka | [2025-06-15 18:32:41,726] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 18:36:07 kafka | [2025-06-15 18:32:41,727] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 18:36:07 kafka | [2025-06-15 18:32:41,727] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 18:36:07 kafka | [2025-06-15 18:32:41,738] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,742] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:32:41,745] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,746] INFO [GroupCoordinator 1]: Startup complete. 
(kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:32:41,757] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 18:36:07 kafka | [2025-06-15 18:32:41,773] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 18:36:07 kafka | [2025-06-15 18:32:41,776] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 18:36:07 kafka | [2025-06-15 18:32:41,781] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 18:36:07 kafka | [2025-06-15 18:32:41,794] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,795] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache) 18:36:07 kafka | [2025-06-15 18:32:41,798] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,800] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,802] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,816] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 18:36:07 kafka | [2025-06-15 18:32:41,818] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,823] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,831] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 18:36:07 kafka | [2025-06-15 18:32:41,842] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 18:36:07 kafka | [2025-06-15 18:32:41,852] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 18:36:07 kafka | [2025-06-15 18:32:41,854] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,855] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,855] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,856] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,859] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,860] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,860] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,861] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 18:36:07 kafka | [2025-06-15 18:32:41,862] INFO 
[Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,865] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 18:36:07 kafka | [2025-06-15 18:32:41,868] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:32:41,876] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 18:36:07 kafka | [2025-06-15 18:32:41,876] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 18:36:07 kafka | [2025-06-15 18:32:41,891] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) 18:36:07 kafka | [2025-06-15 18:32:41,891] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) 18:36:07 kafka | [2025-06-15 18:32:41,891] INFO Kafka startTimeMs: 1750012361870 (org.apache.kafka.common.utils.AppInfoParser) 18:36:07 kafka | [2025-06-15 18:32:41,893] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 18:36:07 kafka | [2025-06-15 18:32:41,896] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 18:36:07 kafka | [2025-06-15 18:32:41,896] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 18:36:07 kafka | [2025-06-15 18:32:41,897] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 18:36:07 kafka | [2025-06-15 18:32:41,903] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 18:36:07 kafka | [2025-06-15 18:32:41,905] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 18:36:07 kafka | [2025-06-15 18:32:41,909] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 18:36:07 kafka | [2025-06-15 18:32:41,909] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,916] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,916] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,917] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,917] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,918] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,933] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:41,954] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new 
controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 18:36:07 kafka | [2025-06-15 18:32:41,966] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:32:41,990] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 18:36:07 kafka | [2025-06-15 18:32:46,934] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:32:46,935] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:33:15,464] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:33:15,466] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 18:36:07 kafka | [2025-06-15 18:33:15,467] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 18:36:07 kafka | [2025-06-15 18:33:15,516] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:33:15,537] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(6-WUUYKCSwWBY3ulYWg4Gg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:33:15,537] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:33:15,539] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,539] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,543] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,567] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,569] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,571] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,572] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,572] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,577] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,579] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,580] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(vY2hT0z9RN-xHZsW3qeLMQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> 
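ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)

Both topics created at 18:33:15 follow directly from the config dump earlier: policy-pdp-pap is auto-created on first use (auto.create.topics.enable = true) with num.partitions = 1 and default.replication.factor = 1, while __consumer_offsets gets its 50 partitions and replication factor 1 from offsets.topic.num.partitions and offsets.topic.replication.factor — a replication factor of 1 is only safe in a single-broker environment like this CSIT setup. Each consumer group's committed offsets are then stored in one of those 50 partitions, selected by hashing the group id. The explicit Admin-API equivalent of the implicit policy-pdp-pap creation would look roughly like this (a sketch, assuming the kafka-clients library on the classpath and a client running on the host; the class name is illustrative):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreatePdpPapTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Same shape as the implicit creation in the log:
            // 1 partition, replication factor 1.
            NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```
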
18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,583] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,586] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,606] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,612] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) 18:36:07 kafka | [2025-06-15 18:33:15,613] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,722] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:15,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,738] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
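The entries above and below trace the controller-side partition state machine: each of the 50 __consumer_offsets partitions is first moved from NonExistentPartition to NewPartition (replica assigned) and then to OnlinePartition (leader elected, ISR set). A minimal sketch of that life-cycle, assuming the single-broker case shown in this log (leader=1, isr=[1]); this is an illustrative model of the transitions as logged, not Kafka's actual controller code:

# Sketch of the partition life-cycle visible in this log:
# NonExistentPartition -> NewPartition -> OnlinePartition.
VALID_TRANSITIONS = {
    "NonExistentPartition": {"NewPartition"},
    "NewPartition": {"OnlinePartition"},
}

class Partition:
    def __init__(self, topic: str, index: int) -> None:
        self.name = f"{topic}-{index}"
        self.state = "NonExistentPartition"

    def transition(self, target: str) -> None:
        # Refuse transitions the (simplified) state machine does not allow.
        if target not in VALID_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {target}")
        print(f"Changed partition {self.name} state from {self.state} to {target}")
        self.state = target

# The log shows all 50 __consumer_offsets partitions taking both steps.
for i in range(50):
    p = Partition("__consumer_offsets", i)
    p.transition("NewPartition")
    p.transition("OnlinePartition")

Running the sketch emits one "Changed partition ... state" line per transition, mirroring the two state-change entries the controller logs for each of the 50 partitions.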
18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,739] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
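Once a partition reaches OnlinePartition, the controller sends the broker one become-leader LeaderAndIsr request per partition (the TRACE entries that follow). A hedged sketch of the per-partition payload exactly as it is printed in those entries, with field names copied verbatim from the log; the dataclass is illustrative only, not Kafka's wire format:

# Plain dataclass mirroring the fields logged for each
# LeaderAndIsrPartitionState below (illustrative, not Kafka code).
from dataclasses import dataclass, field
from typing import List

@dataclass
class LeaderAndIsrPartitionState:
    topicName: str
    partitionIndex: int
    controllerEpoch: int
    leader: int
    leaderEpoch: int
    isr: List[int]
    partitionEpoch: int
    replicas: List[int]
    addingReplicas: List[int] = field(default_factory=list)
    removingReplicas: List[int] = field(default_factory=list)
    isNew: bool = True
    leaderRecoveryState: int = 0  # appears to correspond to RECOVERED above

# Example matching the first request logged below (__consumer_offsets-13):
state = LeaderAndIsrPartitionState(
    topicName="__consumer_offsets", partitionIndex=13, controllerEpoch=1,
    leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1],
)
print(state)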
18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,740] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,742] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,743] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,743] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,743] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,743] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,743] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,744] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:15,750] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 
18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,751] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,752] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,754] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(6-WUUYKCSwWBY3ulYWg4Gg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,774] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,781] INFO [Broker id=1] Finished LeaderAndIsr request in 205ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,786] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=6-WUUYKCSwWBY3ulYWg4Gg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,790] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,791] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,793] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,798] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,799] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,799] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,799] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,799] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,799] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,799] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,799] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,799] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,800] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,800] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,800] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,800] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,800] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,800] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,800] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,800] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,801] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,801] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,801] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,801] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,801] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,801] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,801] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,801] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,801] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 
18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,802] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,803] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,803] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,803] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,803] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | 
[2025-06-15 18:33:15,803] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,803] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,803] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,803] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,803] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,804] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,804] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,804] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,804] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 
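
The exchange above is Kafka's ZooKeeper-era LeaderAndIsr flow: the controller (id=1, epoch=1) walks each replica from NewReplica to OnlineReplica, pushes per-partition LeaderAndIsr state to the broker, and confirms the result through UpdateMetadata. A minimal client-side sketch of how the resulting assignment could be verified, assuming the confluent-kafka Python package and the kafka:9092 listener used by this compose stack (neither is invoked by the CSIT job itself):

    # Sketch: read one cluster-metadata snapshot and print leader/ISR per
    # partition, mirroring the state.change.logger entries in this log.
    from confluent_kafka.admin import AdminClient

    admin = AdminClient({"bootstrap.servers": "kafka:9092"})
    md = admin.list_topics(timeout=10)  # single metadata request

    for name in ("policy-pdp-pap", "__consumer_offsets"):
        topic = md.topics.get(name)
        if topic is None:
            print(f"{name}: not found")
            continue
        for pid, part in sorted(topic.partitions.items()):
            print(f"{name}-{pid}: leader={part.leader} "
                  f"replicas={part.replicas} isr={part.isrs}")

On this single-broker setup every partition should report leader=1 with replicas=[1] and isr=[1], matching the entries above.
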
kafka | [2025-06-15 18:33:15,804] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,825] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-9 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,826] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,827] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,827] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,827] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,827] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,827] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,827] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,827] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 18:36:07 kafka | 
[2025-06-15 18:33:15,827] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,827] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,828] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,828] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,828] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,828] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,828] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,828] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,828] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,828] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,828] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,828] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,829] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,829] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,829] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,829] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 
from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,829] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,829] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,829] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,829] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,830] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,830] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 18:36:07 kafka | [2025-06-15 18:33:15,831] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,839] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:15,840] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:15,841] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,841] INFO [Partition __consumer_offsets-3 broker=1] 
Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,841] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,853] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:15,854] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:15,854] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,854] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,854] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,889] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:15,890] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:15,890] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,890] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,890] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,902] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:15,903] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:15,903] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,903] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,903] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,924] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:15,924] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:15,924] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,924] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,925] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,952] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:15,953] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:15,953] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,953] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,953] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,968] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:15,969] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:15,969] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,969] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,969] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:15,985] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:15,986] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:15,986] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,987] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:15,987] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,000] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,001] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,001] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,001] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,002] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,016] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,017] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,019] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,019] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,019] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,028] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,029] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,029] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,029] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,030] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,040] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,041] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,041] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,041] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,041] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,085] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,086] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,086] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,086] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,086] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,100] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,101] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,101] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,101] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,101] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,110] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,111] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,112] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,112] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,112] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,124] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,125] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,125] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,125] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,125] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,133] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,133] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,133] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,133] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,134] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,141] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,142] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,142] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,142] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,142] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,148] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,149] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,149] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,149] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,149] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,158] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,159] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,159] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,159] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,159] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,172] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,173] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,173] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,173] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,174] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,179] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,180] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,180] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,180] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,180] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,187] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,187] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,187] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,187] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,187] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,193] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,193] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,193] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,193] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,193] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,201] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,201] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,201] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,201] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,201] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,208] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,208] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,208] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,208] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,209] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,214] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,215] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,215] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,215] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,215] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,219] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,220] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,220] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,220] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,220] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,227] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,227] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,227] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,227] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,228] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,237] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,237] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,237] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,237] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,237] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,283] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,284] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,284] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,284] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,284] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,290] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,290] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,290] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,290] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,290] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,298] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,298] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,298] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,298] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,298] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,311] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,311] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,311] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,311] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,311] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,318] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,319] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,319] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,319] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,320] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,328] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,329] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,329] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,329] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,329] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,338] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,339] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,339] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,339] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,339] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,346] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,347] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,347] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,347] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,347] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,356] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,357] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,357] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,357] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,357] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,364] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,365] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,365] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,365] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,365] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,396] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,397] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,397] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,397] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,397] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,405] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,405] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,405] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,405] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,405] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,416] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,417] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,417] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,417] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,417] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,426] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,427] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,427] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,427] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,427] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,438] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,439] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,439] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,440] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,440] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,448] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,448] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,449] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,449] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,449] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,463] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,466] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,466] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,466] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,467] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,479] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,479] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,479] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,479] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,479] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,491] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,492] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,492] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,492] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,492] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,501] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:16,501] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:16,501] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,501] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:16,501] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(vY2hT0z9RN-xHZsW3qeLMQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
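Every block ends with the broker electing itself leader at leader epoch 0 with ISR [1], which is expected on this single-broker compose stack: broker 1 is the only replica, so it is always leader and the ISR is just itself. The same leader/ISR view can be queried per partition; below is a sketch under the same assumptions as above (kafka-clients 3.1+ for allTopicNames(), placeholder bootstrap address):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class OffsetsTopicLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin
                .describeTopics(Collections.singleton("__consumer_offsets"))
                .allTopicNames().get()
                .get("__consumer_offsets");
            for (TopicPartitionInfo p : desc.partitions()) {
                // On this single-broker stack every partition should report
                // leader 1 and ISR [1], matching the state.change.logger entries.
                System.out.printf("partition %d leader=%d isr=%s%n",
                    p.partition(), p.leader().id(), p.isr());
            }
        }
    }
}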
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE 
[Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-6 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,507] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,508] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,509] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,509] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,509] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,509] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,509] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,509] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,509] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,509] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,509] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,510] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:16,511] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,512] INFO [Broker id=1] Finished LeaderAndIsr request in 714ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,513] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=vY2hT0z9RN-xHZsW3qeLMQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,514] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 5 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,514] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,517] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,518] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,519] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,520] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,521] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,521] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,521] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,521] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,521] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,521] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,521] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,521] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,521] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:16,522] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,522] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,522] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,522] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:16,522] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 18:36:07 kafka | [2025-06-15 18:33:17,059] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group fa1957b5-0078-4a1f-ae83-bcab973764e3 in Empty state. Created a new member id consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3-1ce6a178-25c6-4671-99fc-a5d2d32fa8e9 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:17,084] INFO [GroupCoordinator 1]: Preparing to rebalance group fa1957b5-0078-4a1f-ae83-bcab973764e3 in state PreparingRebalance with old generation 0 (__consumer_offsets-32) (reason: Adding new member consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3-1ce6a178-25c6-4671-99fc-a5d2d32fa8e9 with group instance id None; client reason: need to re-join with the given member-id: consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3-1ce6a178-25c6-4671-99fc-a5d2d32fa8e9) (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:17,191] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 38fd38ff-592e-4c56-927f-cdd1f27311ce in Empty state. Created a new member id consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2-4dda311c-3071-4b89-8df7-26c04c67b5ce and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:17,194] INFO [GroupCoordinator 1]: Preparing to rebalance group 38fd38ff-592e-4c56-927f-cdd1f27311ce in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2-4dda311c-3071-4b89-8df7-26c04c67b5ce with group instance id None; client reason: need to re-join with the given member-id: consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2-4dda311c-3071-4b89-8df7-26c04c67b5ce) (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:17,269] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-63f2110d-42d3-428c-997b-d65c58061d50 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:17,272] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-63f2110d-42d3-428c-997b-d65c58061d50 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-63f2110d-42d3-428c-997b-d65c58061d50) (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:20,096] INFO [GroupCoordinator 1]: Stabilized group fa1957b5-0078-4a1f-ae83-bcab973764e3 generation 1 (__consumer_offsets-32) with 1 members (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:20,118] INFO [GroupCoordinator 1]: Assignment received from leader consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3-1ce6a178-25c6-4671-99fc-a5d2d32fa8e9 for group fa1957b5-0078-4a1f-ae83-bcab973764e3 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:20,195] INFO [GroupCoordinator 1]: Stabilized group 38fd38ff-592e-4c56-927f-cdd1f27311ce generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:20,213] INFO [GroupCoordinator 1]: Assignment received from leader consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2-4dda311c-3071-4b89-8df7-26c04c67b5ce for group 38fd38ff-592e-4c56-927f-cdd1f27311ce for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:20,273] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:20,277] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-63f2110d-42d3-428c-997b-d65c58061d50 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:33:22,300] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 18:36:07 kafka | [2025-06-15 18:33:22,313] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(dgQfhKTqSSGyZ2FVnkePvg),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:33:22,313] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) 18:36:07 kafka | [2025-06-15 18:33:22,313] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,314] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,314] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,314] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,332] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,332] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,332] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,333] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,333] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,333] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,334] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 for 1 
partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,334] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 5 from controller 1 epoch 1 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,335] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,335] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) 18:36:07 kafka | [2025-06-15 18:33:22,335] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 5 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,339] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 18:36:07 kafka | [2025-06-15 18:33:22,340] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) 18:36:07 kafka | [2025-06-15 18:33:22,341] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:22,341] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) 18:36:07 kafka | [2025-06-15 18:33:22,341] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(dgQfhKTqSSGyZ2FVnkePvg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,349] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 5 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,350] INFO [Broker id=1] Finished LeaderAndIsr request in 16ms correlationId 5 from controller 1 for 1 partitions (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,351] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=dgQfhKTqSSGyZ2FVnkePvg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 5 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,353] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger) 18:36:07 kafka | [2025-06-15 18:33:22,354] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 6 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 18:36:07 kafka | [2025-06-15 18:35:00,878] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-767cf13e-5b88-4c98-a23a-10dec537f085 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:35:00,880] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-767cf13e-5b88-4c98-a23a-10dec537f085 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:35:03,881] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:35:03,884] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-767cf13e-5b88-4c98-a23a-10dec537f085 for group testgrp for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:35:04,009] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-767cf13e-5b88-4c98-a23a-10dec537f085 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:35:04,010] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 18:36:07 kafka | [2025-06-15 18:35:04,013] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-767cf13e-5b88-4c98-a23a-10dec537f085, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 18:36:07 policy-api | Waiting for policy-db-migrator port 6824... 18:36:07 policy-api | policy-db-migrator (172.17.0.7:6824) open 18:36:07 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 18:36:07 policy-api | 18:36:07 policy-api | . ____ _ __ _ _ 18:36:07 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 18:36:07 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 18:36:07 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 18:36:07 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 18:36:07 policy-api | =========|_|==============|___/=/_/_/_/ 18:36:07 policy-api | 18:36:07 policy-api | :: Spring Boot :: (v3.4.6) 18:36:07 policy-api | 18:36:07 policy-api | [2025-06-15T18:32:54.340+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final 18:36:07 policy-api | [2025-06-15T18:32:54.409+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 37 (/app/api.jar started by policy in /opt/app/policy/api/bin) 18:36:07 policy-api | [2025-06-15T18:32:54.409+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" 18:36:07 policy-api | [2025-06-15T18:32:55.822+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 18:36:07 policy-api | [2025-06-15T18:32:55.989+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 157 ms. Found 6 JPA repository interfaces. 
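The kafka GroupCoordinator entries above trace one complete consumer-group lifecycle: a dynamic member with no member id joins group testgrp, the coordinator rebalances the group from Empty through PreparingRebalance to Stabilized at generation 1, accepts the leader's assignment, and finally an explicit LeaveGroup empties the group again at generation 2. A minimal sketch of a client that produces exactly this sequence, assuming confluent-kafka-python (a wrapper over librdkafka, the "rdkafka" client id seen in the log) and a broker reachable at kafka:9092; the topic name here is illustrative:

    # Minimal sketch of the consumer lifecycle behind the GroupCoordinator
    # entries above: join group "testgrp" (Empty -> PreparingRebalance ->
    # Stabilized, generation 1), take the assignment, then leave explicitly,
    # which empties the group at generation 2.
    from confluent_kafka import Consumer

    consumer = Consumer({
        'bootstrap.servers': 'kafka:9092',
        'group.id': 'testgrp',            # group name from the log
        'session.timeout.ms': 45000,      # matches sessionTimeoutMs=45000 above
        'auto.offset.reset': 'earliest',
    })
    consumer.subscribe(['policy-notification'])  # any existing topic triggers the join
    try:
        msg = consumer.poll(timeout=5.0)  # first poll drives the JoinGroup/SyncGroup round
        if msg is not None and msg.error() is None:
            print(msg.value())
    finally:
        consumer.close()  # sends the explicit LeaveGroup seen in the coordinator log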
18:36:07 policy-api | [2025-06-15T18:32:56.647+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 18:36:07 policy-api | [2025-06-15T18:32:56.660+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 18:36:07 policy-api | [2025-06-15T18:32:56.662+00:00|INFO|StandardService|main] Starting service [Tomcat] 18:36:07 policy-api | [2025-06-15T18:32:56.662+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 18:36:07 policy-api | [2025-06-15T18:32:56.699+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 18:36:07 policy-api | [2025-06-15T18:32:56.699+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2230 ms 18:36:07 policy-api | [2025-06-15T18:32:57.000+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 18:36:07 policy-api | [2025-06-15T18:32:57.088+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 18:36:07 policy-api | [2025-06-15T18:32:57.138+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 18:36:07 policy-api | [2025-06-15T18:32:57.499+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 18:36:07 policy-api | [2025-06-15T18:32:57.532+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 18:36:07 policy-api | [2025-06-15T18:32:57.727+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@5342032a 18:36:07 policy-api | [2025-06-15T18:32:57.729+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 18:36:07 policy-api | [2025-06-15T18:32:57.814+00:00|INFO|pooling|main] HHH10001005: Database info: 18:36:07 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 18:36:07 policy-api | Database driver: undefined/unknown 18:36:07 policy-api | Database version: 16.4 18:36:07 policy-api | Autocommit mode: undefined/unknown 18:36:07 policy-api | Isolation level: undefined/unknown 18:36:07 policy-api | Minimum pool size: undefined/unknown 18:36:07 policy-api | Maximum pool size: undefined/unknown 18:36:07 policy-api | [2025-06-15T18:32:59.747+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 18:36:07 policy-api | [2025-06-15T18:32:59.751+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 18:36:07 policy-api | [2025-06-15T18:33:00.393+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 18:36:07 policy-api | [2025-06-15T18:33:01.258+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 18:36:07 policy-api | [2025-06-15T18:33:02.408+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning
18:36:07 policy-api | [2025-06-15T18:33:02.465+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
18:36:07 policy-api | [2025-06-15T18:33:03.149+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
18:36:07 policy-api | [2025-06-15T18:33:03.276+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
18:36:07 policy-api | [2025-06-15T18:33:03.301+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
18:36:07 policy-api | [2025-06-15T18:33:03.322+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.657 seconds (process running for 10.239)
18:36:07 policy-api | [2025-06-15T18:33:39.916+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
18:36:07 policy-api | [2025-06-15T18:33:39.916+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
18:36:07 policy-api | [2025-06-15T18:33:39.918+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
18:36:07 policy-api | [2025-06-15T18:34:36.370+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers:
18:36:07 policy-api | []
18:36:07 policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
18:36:07 policy-csit | Run Robot test
18:36:07 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
18:36:07 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
18:36:07 policy-csit | -v POLICY_API_IP:policy-api:6969
18:36:07 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
18:36:07 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
18:36:07 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
18:36:07 policy-csit | -v APEX_IP:policy-apex-pdp:6969
18:36:07 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
18:36:07 policy-csit | -v KAFKA_IP:kafka:9092
18:36:07 policy-csit | -v PROMETHEUS_IP:prometheus:9090
18:36:07 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
18:36:07 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
18:36:07 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
18:36:07 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
18:36:07 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
18:36:07 policy-csit | -v TEMP_FOLDER:/tmp/distribution
18:36:07 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
18:36:07 policy-csit | -v TEST_ENV:docker
18:36:07 policy-csit | -v JAEGER_IP:jaeger:16686
18:36:07 policy-csit | Starting Robot test suites ...
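The ROBOT_VARIABLES listed above are passed to the Robot Framework command line as -v NAME:value overrides, so the suites resolve service endpoints such as POLICY_PDPX_IP at runtime. A rough equivalent of the invocation, assuming the robot package is installed and the two suite files are in the working directory (the actual CSIT driver scripting is not shown in this log):

    # Illustrative Robot invocation mirroring the ROBOT_VARIABLES above.
    # -v injects the same endpoint variables the suites read; --outputdir
    # matches the Output/Log/Report paths printed at the end of the run.
    import sys
    from robot import run_cli

    args = [
        '-v', 'DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies',
        '-v', 'POLICY_API_IP:policy-api:6969',
        '-v', 'POLICY_PAP_IP:policy-pap:6969',
        '-v', 'POLICY_PDPX_IP:policy-xacml-pdp:6969',
        '-v', 'KAFKA_IP:kafka:9092',
        '-v', 'PROMETHEUS_IP:prometheus:9090',
        '--outputdir', '/tmp/results',
        'xacml-pdp-test.robot',
        'xacml-pdp-slas.robot',
    ]
    rc = run_cli(args, exit=False)  # rc == 0 on success ("RESULT: 0" below)
    sys.exit(rc)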
18:36:07 policy-csit | ==============================================================================
18:36:07 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
18:36:07 policy-csit | ==============================================================================
18:36:07 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
18:36:07 policy-csit | ==============================================================================
18:36:07 policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
18:36:07 policy-csit | ------------------------------------------------------------------------------
18:36:07 policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
18:36:07 policy-csit | ------------------------------------------------------------------------------
18:36:07 policy-csit | MakeTopics :: Creates the Policy topics | PASS |
18:36:07 policy-csit | ------------------------------------------------------------------------------
18:36:07 policy-csit | ExecuteXacmlPolicy | PASS |
18:36:07 policy-csit | ------------------------------------------------------------------------------
18:36:07 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
18:36:07 policy-csit | 4 tests, 4 passed, 0 failed
18:36:07 policy-csit | ==============================================================================
18:36:07 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
18:36:07 policy-csit | ==============================================================================
18:36:07 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
18:36:07 policy-csit | ------------------------------------------------------------------------------
18:36:07 policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
18:36:07 policy-csit | ------------------------------------------------------------------------------
18:36:07 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
18:36:07 policy-csit | 2 tests, 2 passed, 0 failed
18:36:07 policy-csit | ==============================================================================
18:36:07 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
18:36:07 policy-csit | 6 tests, 6 passed, 0 failed
18:36:07 policy-csit | ==============================================================================
18:36:07 policy-csit | Output: /tmp/results/output.xml
18:36:07 policy-csit | Log: /tmp/results/log.html
18:36:07 policy-csit | Report: /tmp/results/report.html
18:36:07 policy-csit | RESULT: 0
18:36:07 policy-db-migrator | Waiting for postgres port 5432...
18:36:07 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
18:36:07 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
18:36:07 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
18:36:07 policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded!
18:36:07 policy-db-migrator | Initializing policyadmin...
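Every migrator step that follows has the same shape: a "> upgrade NNNN-*.sql" header naming one versioned script, the script's DDL result (CREATE TABLE, ALTER TABLE, or CREATE INDEX), an INSERT 0 1 recording the step in the policyadmin_schema_changelog table that is dumped at the end of the run, and rc=0. A simplified sketch of that loop, assuming psycopg2 and the connection details visible in this log; the real ONAP db-migrator drives psql from shell scripts, and the script directory and exact changelog columns here are inferred from the output below, so this is illustrative only:

    # Illustrative sketch of the per-script upgrade pattern visible below
    # (not the actual ONAP db-migrator). One transaction per script; each
    # outcome is recorded in policyadmin_schema_changelog (the dump below
    # also shows an id and tag column, omitted here for brevity).
    import glob
    import psycopg2

    conn = psycopg2.connect(host='postgres', port=5432,
                            dbname='policyadmin', user='policy_user',
                            password='***')  # placeholder; real credentials come from the env

    # hypothetical script location; the log only shows the script names
    for script in sorted(glob.glob('/opt/migration/policyadmin/upgrade/*.sql')):
        name = script.rsplit('/', 1)[-1]
        print(f'> upgrade {name}')
        with conn, conn.cursor() as cur:      # commit on success, rollback on error
            cur.execute(open(script).read())  # the CREATE TABLE / ALTER TABLE step
            cur.execute(
                'INSERT INTO policyadmin_schema_changelog'
                ' (script, operation, from_version, to_version, success, attime)'
                ' VALUES (%s, %s, %s, %s, %s, now())',
                (name, 'upgrade', '0', '0800', 1))  # the INSERT 0 1 step
        print('rc=0')
    conn.close()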
18:36:07 policy-db-migrator | 321 blocks 18:36:07 policy-db-migrator | Preparing upgrade release version: 0800 18:36:07 policy-db-migrator | Preparing upgrade release version: 0900 18:36:07 policy-db-migrator | Preparing upgrade release version: 1000 18:36:07 policy-db-migrator | Preparing upgrade release version: 1100 18:36:07 policy-db-migrator | Preparing upgrade release version: 1200 18:36:07 policy-db-migrator | Preparing upgrade release version: 1300 18:36:07 policy-db-migrator | Done 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | name | version 18:36:07 policy-db-migrator | -------------+--------- 18:36:07 policy-db-migrator | policyadmin | 0 18:36:07 policy-db-migrator | (1 row) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:07 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 18:36:07 policy-db-migrator | (0 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 18:36:07 policy-db-migrator | upgrade: 0 -> 1300 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | 
rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 18:36:07 
policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0450-pdpgroup.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > 
upgrade 0470-pdp.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0570-toscadatatype.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 
0630-toscanodetype.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0660-toscaparameter.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0670-toscapolicies.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0690-toscapolicy.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0730-toscaproperty.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0770-toscarequirement.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0780-toscarequirements.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 18:36:07 
policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0820-toscatrigger.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 
policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-pdp.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0210-sequence.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0220-sequence.sql 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0120-toscatrigger.sql 18:36:07 policy-db-migrator | DROP TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0140-toscaparameter.sql 18:36:07 policy-db-migrator | DROP TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0150-toscaproperty.sql 18:36:07 policy-db-migrator | DROP TABLE 18:36:07 policy-db-migrator | DROP TABLE 18:36:07 policy-db-migrator | DROP TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 
1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-upgrade.sql 18:36:07 policy-db-migrator | msg 18:36:07 policy-db-migrator | --------------------------- 18:36:07 policy-db-migrator | upgrade to 1100 completed 18:36:07 policy-db-migrator | (1 row) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 18:36:07 policy-db-migrator | DROP INDEX 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0120-audit_sequence.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 18:36:07 policy-db-migrator | DROP TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 18:36:07 policy-db-migrator | DROP TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 18:36:07 policy-db-migrator | DROP TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | policyadmin: OK: upgrade (1300) 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 18:36:07 policy-db-migrator | name | version 18:36:07 policy-db-migrator | -------------+--------- 18:36:07 policy-db-migrator | policyadmin | 1300 18:36:07 policy-db-migrator | (1 row) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:07 policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 18:36:07 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:41.627255 18:36:07 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:41.67404 18:36:07 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:41.729993 18:36:07 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:41.780398 18:36:07 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:41.827984 18:36:07 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:41.881712 18:36:07 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:41.949185 18:36:07 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:41.996431 18:36:07 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.044079 18:36:07 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.093617 18:36:07 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.139344 18:36:07 
policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.1874 18:36:07 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.275623 18:36:07 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.32318 18:36:07 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.375142 18:36:07 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.429036 18:36:07 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.477402 18:36:07 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.545085 18:36:07 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.595153 18:36:07 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.642423 18:36:07 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.690641 18:36:07 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.741118 18:36:07 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.789403 18:36:07 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.845303 18:36:07 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.892771 18:36:07 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.943204 18:36:07 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:42.988165 18:36:07 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.044736 18:36:07 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.088212 18:36:07 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.167887 18:36:07 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.215018 18:36:07 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.275359 18:36:07 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.325684 18:36:07 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.377523 18:36:07 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.427904 18:36:07 policy-db-migrator | 36 | 
0450-pdpgroup.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.487623 18:36:07 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.535932 18:36:07 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.586624 18:36:07 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.632908 18:36:07 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.689505 18:36:07 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.733782 18:36:07 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.82511 18:36:07 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.876338 18:36:07 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.9308 18:36:07 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:43.987628 18:36:07 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.036881 18:36:07 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.087604 18:36:07 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.141284 18:36:07 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.190723 18:36:07 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.238893 18:36:07 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.296146 18:36:07 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.380192 18:36:07 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.43295 18:36:07 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.489803 18:36:07 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.541853 18:36:07 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.595582 18:36:07 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.648227 18:36:07 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.732951 18:36:07 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.780865 18:36:07 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.834676 18:36:07 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 
18:32:44.889175 18:36:07 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:44.940229 18:36:07 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.029347 18:36:07 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.083349 18:36:07 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.135874 18:36:07 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.189687 18:36:07 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.246071 18:36:07 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.299177 18:36:07 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.362192 18:36:07 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.411332 18:36:07 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.465658 18:36:07 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.52181 18:36:07 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.579706 18:36:07 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.670945 18:36:07 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.721232 18:36:07 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.765768 18:36:07 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.812588 18:36:07 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.861047 18:36:07 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.912133 18:36:07 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:45.962498 18:36:07 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.010573 18:36:07 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.059794 18:36:07 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.108885 18:36:07 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.160838 18:36:07 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 
1506251832410800u | 1 | 2025-06-15 18:32:46.234502 18:36:07 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.283368 18:36:07 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.333571 18:36:07 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.38189 18:36:07 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.43191 18:36:07 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.481048 18:36:07 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.532518 18:36:07 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.581013 18:36:07 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.629215 18:36:07 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.677048 18:36:07 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.72574 18:36:07 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1506251832410800u | 1 | 2025-06-15 18:32:46.774266 18:36:07 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:46.841401 18:36:07 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:46.889365 18:36:07 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:46.939687 18:36:07 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:46.986952 18:36:07 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:47.042083 18:36:07 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:47.092084 18:36:07 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:47.153535 18:36:07 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:47.203027 18:36:07 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:47.25092 18:36:07 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:47.309802 18:36:07 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:47.361553 18:36:07 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1506251832410900u | 1 | 2025-06-15 18:32:47.419537 18:36:07 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 
1506251832410900u | 1 | 2025-06-15 18:32:47.46883 18:36:07 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1506251832411000u | 1 | 2025-06-15 18:32:47.521484 18:36:07 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1506251832411000u | 1 | 2025-06-15 18:32:47.573077 18:36:07 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1506251832411000u | 1 | 2025-06-15 18:32:47.621272 18:36:07 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1506251832411000u | 1 | 2025-06-15 18:32:47.677458 18:36:07 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1506251832411000u | 1 | 2025-06-15 18:32:47.740757 18:36:07 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1506251832411000u | 1 | 2025-06-15 18:32:47.79437 18:36:07 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1506251832411000u | 1 | 2025-06-15 18:32:47.850156 18:36:07 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1506251832411000u | 1 | 2025-06-15 18:32:47.900759 18:36:07 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1506251832411000u | 1 | 2025-06-15 18:32:47.949186 18:36:07 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1506251832411100u | 1 | 2025-06-15 18:32:47.998196 18:36:07 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1506251832411200u | 1 | 2025-06-15 18:32:48.05854 18:36:07 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1506251832411200u | 1 | 2025-06-15 18:32:48.114404 18:36:07 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1506251832411200u | 1 | 2025-06-15 18:32:48.168624 18:36:07 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1506251832411200u | 1 | 2025-06-15 18:32:48.220975 18:36:07 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1506251832411300u | 1 | 2025-06-15 18:32:48.269169 18:36:07 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1506251832411300u | 1 | 2025-06-15 18:32:48.317673 18:36:07 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1506251832411300u | 1 | 2025-06-15 18:32:48.373368 18:36:07 policy-db-migrator | (126 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | policyadmin: OK @ 1300 18:36:07 policy-db-migrator | Initializing clampacm... 
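
[Editor's note] The policyadmin run above shows the migrator's bookkeeping pattern: each database carries a schema_versions table recording the current version, plus a per-database changelog table (here policyadmin_schema_changelog) that gains one row per script executed, with script name, operation, from/to version, batch tag, success flag, and timestamp; the summary tables in the log are plain SELECTs over those tables, and the repeated "List of databases" dumps are psql \l output taken before and after each step. A minimal sketch of reproducing that changelog summary, assuming the table and column names shown in the log (host and credentials below are placeholders, not values from this job):

    # Sketch: re-run the migrator's changelog summary query.
    # Table/column names come from the log output above; the
    # connection parameters are hypothetical placeholders.
    import psycopg2

    conn = psycopg2.connect(host="localhost", port=5432,
                            dbname="policyadmin",
                            user="policy_user", password="<redacted>")
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT id, script, operation, from_version,
                   to_version, tag, success, attime
            FROM policyadmin_schema_changelog
            ORDER BY id
        """)
        for row in cur.fetchall():
            print(row)
    conn.close()

A success value of 1 on all 126 rows is what "policyadmin: OK @ 1300" summarizes.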
18:36:07 policy-db-migrator | 97 blocks 18:36:07 policy-db-migrator | Preparing upgrade release version: 1400 18:36:07 policy-db-migrator | Preparing upgrade release version: 1500 18:36:07 policy-db-migrator | Preparing upgrade release version: 1600 18:36:07 policy-db-migrator | Preparing upgrade release version: 1601 18:36:07 policy-db-migrator | Preparing upgrade release version: 1700 18:36:07 policy-db-migrator | Preparing upgrade release version: 1701 18:36:07 policy-db-migrator | Done 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | name | version 18:36:07 policy-db-migrator | ----------+--------- 18:36:07 policy-db-migrator | clampacm | 0 18:36:07 policy-db-migrator | (1 row) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:07 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 18:36:07 policy-db-migrator | (0 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | clampacm: upgrade available: 0 -> 1701 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | upgrade: 0 -> 1701 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-automationcomposition.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0500-participant.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 
18:36:07 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-automationcomposition.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0300-participantreplica.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0400-participant.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-automationcomposition.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 
policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-automationcomposition.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-message.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0200-messagejob.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0200-automationcomposition.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator 
| > upgrade 0700-mb_identificationId_index.sql 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0800-participantreplica.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | UPDATE 0 18:36:07 policy-db-migrator | ALTER TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | clampacm: OK: upgrade (1701) 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 18:36:07 policy-db-migrator | name | version 18:36:07 policy-db-migrator | ----------+--------- 18:36:07 policy-db-migrator | clampacm | 1701 18:36:07 policy-db-migrator | (1 row) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:07 policy-db-migrator | 
----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 18:36:07 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.0251 18:36:07 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.083047 18:36:07 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.140545 18:36:07 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.195486 18:36:07 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.283551 18:36:07 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.338362 18:36:07 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.392807 18:36:07 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.445458 18:36:07 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.497012 18:36:07 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.546372 18:36:07 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.599501 18:36:07 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.649287 18:36:07 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1506251832481400u | 1 | 2025-06-15 18:32:49.697116 18:36:07 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1506251832481500u | 1 | 2025-06-15 18:32:49.744225 18:36:07 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1506251832481500u | 1 | 2025-06-15 18:32:49.793005 18:36:07 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1506251832481500u | 1 | 2025-06-15 18:32:49.845639 18:36:07 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1506251832481500u | 1 | 2025-06-15 18:32:49.914872 18:36:07 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1506251832481500u | 1 | 2025-06-15 18:32:49.96687 18:36:07 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1506251832481500u | 1 | 2025-06-15 18:32:50.01864 18:36:07 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1506251832481500u | 1 | 2025-06-15 18:32:50.065489 18:36:07 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1506251832481500u | 1 | 2025-06-15 18:32:50.115197 18:36:07 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1506251832481600u | 1 | 2025-06-15 18:32:50.15678 18:36:07 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1506251832481600u | 1 | 2025-06-15 18:32:50.210819 18:36:07 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 
| 1506251832481601u | 1 | 2025-06-15 18:32:50.263041 18:36:07 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1506251832481601u | 1 | 2025-06-15 18:32:50.308816 18:36:07 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1506251832481700u | 1 | 2025-06-15 18:32:50.363106 18:36:07 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1506251832481700u | 1 | 2025-06-15 18:32:50.41816 18:36:07 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1506251832481700u | 1 | 2025-06-15 18:32:50.493613 18:36:07 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1506251832481701u | 1 | 2025-06-15 18:32:50.54845 18:36:07 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1506251832481701u | 1 | 2025-06-15 18:32:50.601994 18:36:07 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1506251832481701u | 1 | 2025-06-15 18:32:50.64995 18:36:07 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1506251832481701u | 1 | 2025-06-15 18:32:50.700548 18:36:07 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1506251832481701u | 1 | 2025-06-15 18:32:50.75077 18:36:07 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1506251832481701u | 1 | 2025-06-15 18:32:50.832449 18:36:07 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1506251832481701u | 1 | 2025-06-15 18:32:50.883264 18:36:07 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1506251832481701u | 1 | 2025-06-15 18:32:50.932878 18:36:07 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1506251832481701u | 1 | 2025-06-15 18:32:50.983279 18:36:07 policy-db-migrator | (37 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | clampacm: OK @ 1701 18:36:07 policy-db-migrator | Initializing pooling... 
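
[Editor's note] The clampacm run illustrates staged upgrade-path resolution: starting from version 0, the migrator prepares each intermediate release (1400, 1500, 1600, 1601, 1700, 1701) and runs that release's scripts as one tagged batch. The tag format appears to encode the run timestamp plus the target version plus an operation flag; for example, tag 1506251832481400u sits next to attime 2025-06-15 18:32:48.x and to_version 1400 in the changelog above. A small decoding sketch under that assumption (this layout is inferred from the log, not documented behaviour):

    # Sketch: decode a migrator batch tag. Field layout
    # (DDMMYY + HHMMSS + to_version + operation flag) is an
    # inference from the changelog rows above.
    def parse_tag(tag: str) -> dict:
        return {
            "date":       tag[0:6],    # DDMMYY, e.g. 150625
            "time":       tag[6:12],   # HHMMSS, e.g. 183248
            "to_version": tag[12:-1],  # e.g. 1400
            "operation":  tag[-1],     # 'u' seen on upgrade rows
        }

    print(parse_tag("1506251832481400u"))
    # {'date': '150625', 'time': '183248', 'to_version': '1400', 'operation': 'u'}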
18:36:07 policy-db-migrator | 4 blocks 18:36:07 policy-db-migrator | Preparing upgrade release version: 1600 18:36:07 policy-db-migrator | Done 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | name | version 18:36:07 policy-db-migrator | ---------+--------- 18:36:07 policy-db-migrator | pooling | 0 18:36:07 policy-db-migrator | (1 row) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:07 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 18:36:07 policy-db-migrator | (0 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | pooling: upgrade available: 0 -> 1600 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 18:36:07 policy-db-migrator | upgrade: 0 -> 1600 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-distributed.locking.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | pooling: OK: upgrade (1600) 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 18:36:07 policy-db-migrator | name | version 18:36:07 policy-db-migrator | ---------+--------- 18:36:07 policy-db-migrator | pooling | 1600 18:36:07 policy-db-migrator | (1 row) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:07 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 18:36:07 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1506251832511600u | 1 | 2025-06-15 18:32:51.631002 18:36:07 policy-db-migrator | (1 row) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | pooling: OK @ 1600 18:36:07 policy-db-migrator | Initializing operationshistory... 18:36:07 policy-db-migrator | 6 blocks 18:36:07 policy-db-migrator | Preparing upgrade release version: 1600 18:36:07 policy-db-migrator | Done 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | 
postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | name | version 18:36:07 policy-db-migrator | -------------------+--------- 18:36:07 policy-db-migrator | operationshistory | 0 18:36:07 policy-db-migrator | (1 row) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:07 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 18:36:07 policy-db-migrator | (0 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | upgrade: 
0 -> 1600 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | rc=0 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | > upgrade 0110-operationshistory.sql 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | CREATE INDEX 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | INSERT 0 1 18:36:07 policy-db-migrator | operationshistory: OK: upgrade (1600) 18:36:07 policy-db-migrator | List of databases 18:36:07 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 18:36:07 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 18:36:07 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 18:36:07 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 18:36:07 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 18:36:07 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 18:36:07 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 18:36:07 policy-db-migrator | (9 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 18:36:07 policy-db-migrator | CREATE TABLE 18:36:07 policy-db-migrator | name | version 18:36:07 policy-db-migrator | -------------------+--------- 18:36:07 policy-db-migrator | operationshistory | 1600 18:36:07 policy-db-migrator | (1 row) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 18:36:07 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 18:36:07 policy-db-migrator | 1 | 
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1506251832521600u | 1 | 2025-06-15 18:32:52.234178 18:36:07 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1506251832521600u | 1 | 2025-06-15 18:32:52.297961 18:36:07 policy-db-migrator | (2 rows) 18:36:07 policy-db-migrator | 18:36:07 policy-db-migrator | operationshistory: OK @ 1600 18:36:07 policy-pap | Waiting for api port 6969... 18:36:07 policy-pap | api (172.17.0.8:6969) open 18:36:07 policy-pap | Waiting for kafka port 9092... 18:36:07 policy-pap | kafka (172.17.0.6:9092) open 18:36:07 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 18:36:07 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 18:36:07 policy-pap | 18:36:07 policy-pap | . ____ _ __ _ _ 18:36:07 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 18:36:07 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 18:36:07 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 18:36:07 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 18:36:07 policy-pap | =========|_|==============|___/=/_/_/_/ 18:36:07 policy-pap | 18:36:07 policy-pap | :: Spring Boot :: (v3.4.6) 18:36:07 policy-pap | 18:36:07 policy-pap | [2025-06-15T18:33:05.949+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 59 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 18:36:07 policy-pap | [2025-06-15T18:33:05.951+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" 18:36:07 policy-pap | [2025-06-15T18:33:07.312+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 18:36:07 policy-pap | [2025-06-15T18:33:07.401+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 77 ms. Found 7 JPA repository interfaces. 18:36:07 policy-pap | [2025-06-15T18:33:08.412+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 18:36:07 policy-pap | [2025-06-15T18:33:08.424+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 18:36:07 policy-pap | [2025-06-15T18:33:08.426+00:00|INFO|StandardService|main] Starting service [Tomcat] 18:36:07 policy-pap | [2025-06-15T18:33:08.426+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 18:36:07 policy-pap | [2025-06-15T18:33:08.485+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 18:36:07 policy-pap | [2025-06-15T18:33:08.485+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2477 ms 18:36:07 policy-pap | [2025-06-15T18:33:08.926+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 18:36:07 policy-pap | [2025-06-15T18:33:09.009+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 18:36:07 policy-pap | [2025-06-15T18:33:09.055+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 18:36:07 policy-pap | [2025-06-15T18:33:09.422+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 18:36:07 policy-pap | [2025-06-15T18:33:09.464+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
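
[Editor's note] The "Waiting for api port 6969... api (172.17.0.8:6969) open" lines earlier in the pap startup show the container blocking until each dependency's TCP port accepts connections before launching the application. The real image does this in its entrypoint script; the following is only an illustrative Python equivalent (host names taken from the log, the timeout value is arbitrary):

    # Sketch: TCP-port probe behind pap's "Waiting for ... port" lines.
    # Illustrative only; the actual entrypoint is a shell script.
    import socket
    import time

    def wait_for_port(host: str, port: int, timeout_s: float = 120.0) -> None:
        deadline = time.monotonic() + timeout_s
        print(f"Waiting for {host} port {port}...")
        while True:
            try:
                with socket.create_connection((host, port), timeout=2):
                    print(f"{host} ({socket.gethostbyname(host)}:{port}) open")
                    return
            except OSError:
                if time.monotonic() > deadline:
                    raise TimeoutError(f"{host}:{port} not reachable")
                time.sleep(1)

    wait_for_port("api", 6969)
    wait_for_port("kafka", 9092)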
18:36:07 policy-pap | [2025-06-15T18:33:09.692+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1d6a22dd 18:36:07 policy-pap | [2025-06-15T18:33:09.694+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 18:36:07 policy-pap | [2025-06-15T18:33:09.788+00:00|INFO|pooling|main] HHH10001005: Database info: 18:36:07 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 18:36:07 policy-pap | Database driver: undefined/unknown 18:36:07 policy-pap | Database version: 16.4 18:36:07 policy-pap | Autocommit mode: undefined/unknown 18:36:07 policy-pap | Isolation level: undefined/unknown 18:36:07 policy-pap | Minimum pool size: undefined/unknown 18:36:07 policy-pap | Maximum pool size: undefined/unknown 18:36:07 policy-pap | [2025-06-15T18:33:11.887+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 18:36:07 policy-pap | [2025-06-15T18:33:11.891+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 18:36:07 policy-pap | [2025-06-15T18:33:13.169+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:07 policy-pap | allow.auto.create.topics = true 18:36:07 policy-pap | auto.commit.interval.ms = 5000 18:36:07 policy-pap | auto.include.jmx.reporter = true 18:36:07 policy-pap | auto.offset.reset = latest 18:36:07 policy-pap | bootstrap.servers = [kafka:9092] 18:36:07 policy-pap | check.crcs = true 18:36:07 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:07 policy-pap | client.id = consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-1 18:36:07 policy-pap | client.rack = 18:36:07 policy-pap | connections.max.idle.ms = 540000 18:36:07 policy-pap | default.api.timeout.ms = 60000 18:36:07 policy-pap | enable.auto.commit = true 18:36:07 policy-pap | enable.metrics.push = true 18:36:07 policy-pap | exclude.internal.topics = true 18:36:07 policy-pap | fetch.max.bytes = 52428800 18:36:07 policy-pap | fetch.max.wait.ms = 500 18:36:07 policy-pap | fetch.min.bytes = 1 18:36:07 policy-pap | group.id = fa1957b5-0078-4a1f-ae83-bcab973764e3 18:36:07 policy-pap | group.instance.id = null 18:36:07 policy-pap | group.protocol = classic 18:36:07 policy-pap | group.remote.assignor = null 18:36:07 policy-pap | heartbeat.interval.ms = 3000 18:36:07 policy-pap | interceptor.classes = [] 18:36:07 policy-pap | internal.leave.group.on.close = true 18:36:07 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:07 policy-pap | isolation.level = read_uncommitted 18:36:07 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:07 policy-pap | max.partition.fetch.bytes = 1048576 18:36:07 policy-pap | max.poll.interval.ms = 300000 18:36:07 policy-pap | max.poll.records = 500 18:36:07 policy-pap | metadata.max.age.ms = 300000 18:36:07 policy-pap | metadata.recovery.strategy = none 18:36:07 policy-pap | metric.reporters = [] 18:36:07 policy-pap | metrics.num.samples = 2 18:36:07 policy-pap | metrics.recording.level = INFO 18:36:07 policy-pap | metrics.sample.window.ms = 30000 18:36:07 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:07 policy-pap | receive.buffer.bytes = 65536 18:36:07 policy-pap | reconnect.backoff.max.ms = 1000 18:36:07 policy-pap | reconnect.backoff.ms = 50 
18:36:07 policy-pap | request.timeout.ms = 30000 18:36:07 policy-pap | retry.backoff.max.ms = 1000 18:36:07 policy-pap | retry.backoff.ms = 100 18:36:07 policy-pap | sasl.client.callback.handler.class = null 18:36:07 policy-pap | sasl.jaas.config = null 18:36:07 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:07 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:07 policy-pap | sasl.kerberos.service.name = null 18:36:07 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:07 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:07 policy-pap | sasl.login.callback.handler.class = null 18:36:07 policy-pap | sasl.login.class = null 18:36:07 policy-pap | sasl.login.connect.timeout.ms = null 18:36:07 policy-pap | sasl.login.read.timeout.ms = null 18:36:07 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:07 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:07 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:07 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:07 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.login.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.mechanism = GSSAPI 18:36:07 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:07 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:07 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:07 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:07 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:07 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:07 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:07 policy-pap | security.protocol = PLAINTEXT 18:36:07 policy-pap | security.providers = null 18:36:07 policy-pap | send.buffer.bytes = 131072 18:36:07 policy-pap | session.timeout.ms = 45000 18:36:07 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:07 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:07 policy-pap | ssl.cipher.suites = null 18:36:07 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:07 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:07 policy-pap | ssl.engine.factory.class = null 18:36:07 policy-pap | ssl.key.password = null 18:36:07 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:07 policy-pap | ssl.keystore.certificate.chain = null 18:36:07 policy-pap | ssl.keystore.key = null 18:36:07 policy-pap | ssl.keystore.location = null 18:36:07 policy-pap | ssl.keystore.password = null 18:36:07 policy-pap | ssl.keystore.type = JKS 18:36:07 policy-pap | ssl.protocol = TLSv1.3 18:36:07 policy-pap | ssl.provider = null 18:36:07 policy-pap | ssl.secure.random.implementation = null 18:36:07 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:07 policy-pap | ssl.truststore.certificates = null 18:36:07 policy-pap | ssl.truststore.location = null 18:36:07 policy-pap | ssl.truststore.password = null 18:36:07 policy-pap | ssl.truststore.type = JKS 18:36:07 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:07 policy-pap | 18:36:07 policy-pap | [2025-06-15T18:33:13.228+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:07 
policy-pap | [2025-06-15T18:33:13.372+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:07 policy-pap | [2025-06-15T18:33:13.372+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:07 policy-pap | [2025-06-15T18:33:13.373+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750012393370 18:36:07 policy-pap | [2025-06-15T18:33:13.375+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-1, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Subscribed to topic(s): policy-pdp-pap 18:36:07 policy-pap | [2025-06-15T18:33:13.376+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:07 policy-pap | allow.auto.create.topics = true 18:36:07 policy-pap | auto.commit.interval.ms = 5000 18:36:07 policy-pap | auto.include.jmx.reporter = true 18:36:07 policy-pap | auto.offset.reset = latest 18:36:07 policy-pap | bootstrap.servers = [kafka:9092] 18:36:07 policy-pap | check.crcs = true 18:36:07 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:07 policy-pap | client.id = consumer-policy-pap-2 18:36:07 policy-pap | client.rack = 18:36:07 policy-pap | connections.max.idle.ms = 540000 18:36:07 policy-pap | default.api.timeout.ms = 60000 18:36:07 policy-pap | enable.auto.commit = true 18:36:07 policy-pap | enable.metrics.push = true 18:36:07 policy-pap | exclude.internal.topics = true 18:36:07 policy-pap | fetch.max.bytes = 52428800 18:36:07 policy-pap | fetch.max.wait.ms = 500 18:36:07 policy-pap | fetch.min.bytes = 1 18:36:07 policy-pap | group.id = policy-pap 18:36:07 policy-pap | group.instance.id = null 18:36:07 policy-pap | group.protocol = classic 18:36:07 policy-pap | group.remote.assignor = null 18:36:07 policy-pap | heartbeat.interval.ms = 3000 18:36:07 policy-pap | interceptor.classes = [] 18:36:07 policy-pap | internal.leave.group.on.close = true 18:36:07 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:07 policy-pap | isolation.level = read_uncommitted 18:36:07 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:07 policy-pap | max.partition.fetch.bytes = 1048576 18:36:07 policy-pap | max.poll.interval.ms = 300000 18:36:07 policy-pap | max.poll.records = 500 18:36:07 policy-pap | metadata.max.age.ms = 300000 18:36:07 policy-pap | metadata.recovery.strategy = none 18:36:07 policy-pap | metric.reporters = [] 18:36:07 policy-pap | metrics.num.samples = 2 18:36:07 policy-pap | metrics.recording.level = INFO 18:36:07 policy-pap | metrics.sample.window.ms = 30000 18:36:07 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:07 policy-pap | receive.buffer.bytes = 65536 18:36:07 policy-pap | reconnect.backoff.max.ms = 1000 18:36:07 policy-pap | reconnect.backoff.ms = 50 18:36:07 policy-pap | request.timeout.ms = 30000 18:36:07 policy-pap | retry.backoff.max.ms = 1000 18:36:07 policy-pap | retry.backoff.ms = 100 18:36:07 policy-pap | sasl.client.callback.handler.class = null 18:36:07 policy-pap | sasl.jaas.config = null 18:36:07 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:07 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:07 policy-pap | sasl.kerberos.service.name = null 18:36:07 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:07 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:07 policy-pap | sasl.login.callback.handler.class = null 18:36:07 policy-pap | sasl.login.class 
= null 18:36:07 policy-pap | sasl.login.connect.timeout.ms = null 18:36:07 policy-pap | sasl.login.read.timeout.ms = null 18:36:07 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:07 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:07 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:07 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:07 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.login.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.mechanism = GSSAPI 18:36:07 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:07 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:07 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:07 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:07 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:07 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:07 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:07 policy-pap | security.protocol = PLAINTEXT 18:36:07 policy-pap | security.providers = null 18:36:07 policy-pap | send.buffer.bytes = 131072 18:36:07 policy-pap | session.timeout.ms = 45000 18:36:07 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:07 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:07 policy-pap | ssl.cipher.suites = null 18:36:07 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:07 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:07 policy-pap | ssl.engine.factory.class = null 18:36:07 policy-pap | ssl.key.password = null 18:36:07 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:07 policy-pap | ssl.keystore.certificate.chain = null 18:36:07 policy-pap | ssl.keystore.key = null 18:36:07 policy-pap | ssl.keystore.location = null 18:36:07 policy-pap | ssl.keystore.password = null 18:36:07 policy-pap | ssl.keystore.type = JKS 18:36:07 policy-pap | ssl.protocol = TLSv1.3 18:36:07 policy-pap | ssl.provider = null 18:36:07 policy-pap | ssl.secure.random.implementation = null 18:36:07 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:07 policy-pap | ssl.truststore.certificates = null 18:36:07 policy-pap | ssl.truststore.location = null 18:36:07 policy-pap | ssl.truststore.password = null 18:36:07 policy-pap | ssl.truststore.type = JKS 18:36:07 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:07 policy-pap | 18:36:07 policy-pap | [2025-06-15T18:33:13.376+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:07 policy-pap | [2025-06-15T18:33:13.384+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:07 policy-pap | [2025-06-15T18:33:13.384+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:07 policy-pap | [2025-06-15T18:33:13.384+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750012393384 18:36:07 policy-pap | [2025-06-15T18:33:13.384+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 18:36:07 policy-pap | [2025-06-15T18:33:13.714+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The 
default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 18:36:07 policy-pap | [2025-06-15T18:33:13.825+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 18:36:07 policy-pap | [2025-06-15T18:33:13.898+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 18:36:07 policy-pap | [2025-06-15T18:33:14.094+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
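The ConsumerConfig dumps above are the effective configuration echoed back by kafka-clients at construction time; apart from bootstrap.servers, group.id and the String deserializers, everything is at its client default. A minimal sketch of an equivalent consumer, using only values visible in the dumps (PAP itself generates one group id per instance):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public final class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            // "latest" is the client default; it is what produces the offset resets seen later.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                consumer.poll(Duration.ofSeconds(1)).forEach(r -> System.out.println(r.value()));
            }
        }
    }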
18:36:07 policy-pap | [2025-06-15T18:33:14.772+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 18:36:07 policy-pap | [2025-06-15T18:33:14.873+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 18:36:07 policy-pap | [2025-06-15T18:33:14.896+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' 18:36:07 policy-pap | [2025-06-15T18:33:14.921+00:00|INFO|ServiceManager|main] Policy PAP starting 18:36:07 policy-pap | [2025-06-15T18:33:14.921+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 18:36:07 policy-pap | [2025-06-15T18:33:14.922+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 18:36:07 policy-pap | [2025-06-15T18:33:14.922+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 18:36:07 policy-pap | [2025-06-15T18:33:14.922+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 18:36:07 policy-pap | [2025-06-15T18:33:14.923+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 18:36:07 policy-pap | [2025-06-15T18:33:14.923+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 18:36:07 policy-pap | [2025-06-15T18:33:14.924+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fa1957b5-0078-4a1f-ae83-bcab973764e3, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@34ceabf1 18:36:07 policy-pap | [2025-06-15T18:33:14.934+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fa1957b5-0078-4a1f-ae83-bcab973764e3, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:07 policy-pap | [2025-06-15T18:33:14.935+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:07 policy-pap | allow.auto.create.topics = true 18:36:07 policy-pap | auto.commit.interval.ms = 5000 18:36:07 policy-pap | auto.include.jmx.reporter = true 18:36:07 policy-pap | auto.offset.reset = latest 18:36:07 policy-pap | bootstrap.servers = [kafka:9092] 18:36:07 policy-pap | check.crcs = true 18:36:07 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:07 policy-pap | client.id = consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3 18:36:07 policy-pap | client.rack = 18:36:07 policy-pap | connections.max.idle.ms = 540000 18:36:07 policy-pap | default.api.timeout.ms = 60000 18:36:07 policy-pap | enable.auto.commit = true 18:36:07 policy-pap | enable.metrics.push = true 18:36:07 policy-pap | exclude.internal.topics = true 18:36:07 policy-pap | 
fetch.max.bytes = 52428800 18:36:07 policy-pap | fetch.max.wait.ms = 500 18:36:07 policy-pap | fetch.min.bytes = 1 18:36:07 policy-pap | group.id = fa1957b5-0078-4a1f-ae83-bcab973764e3 18:36:07 policy-pap | group.instance.id = null 18:36:07 policy-pap | group.protocol = classic 18:36:07 policy-pap | group.remote.assignor = null 18:36:07 policy-pap | heartbeat.interval.ms = 3000 18:36:07 policy-pap | interceptor.classes = [] 18:36:07 policy-pap | internal.leave.group.on.close = true 18:36:07 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:07 policy-pap | isolation.level = read_uncommitted 18:36:07 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:07 policy-pap | max.partition.fetch.bytes = 1048576 18:36:07 policy-pap | max.poll.interval.ms = 300000 18:36:07 policy-pap | max.poll.records = 500 18:36:07 policy-pap | metadata.max.age.ms = 300000 18:36:07 policy-pap | metadata.recovery.strategy = none 18:36:07 policy-pap | metric.reporters = [] 18:36:07 policy-pap | metrics.num.samples = 2 18:36:07 policy-pap | metrics.recording.level = INFO 18:36:07 policy-pap | metrics.sample.window.ms = 30000 18:36:07 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:07 policy-pap | receive.buffer.bytes = 65536 18:36:07 policy-pap | reconnect.backoff.max.ms = 1000 18:36:07 policy-pap | reconnect.backoff.ms = 50 18:36:07 policy-pap | request.timeout.ms = 30000 18:36:07 policy-pap | retry.backoff.max.ms = 1000 18:36:07 policy-pap | retry.backoff.ms = 100 18:36:07 policy-pap | sasl.client.callback.handler.class = null 18:36:07 policy-pap | sasl.jaas.config = null 18:36:07 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:07 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:07 policy-pap | sasl.kerberos.service.name = null 18:36:07 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:07 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:07 policy-pap | sasl.login.callback.handler.class = null 18:36:07 policy-pap | sasl.login.class = null 18:36:07 policy-pap | sasl.login.connect.timeout.ms = null 18:36:07 policy-pap | sasl.login.read.timeout.ms = null 18:36:07 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:07 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:07 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:07 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:07 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.login.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.mechanism = GSSAPI 18:36:07 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:07 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:07 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:07 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:07 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:07 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:07 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:07 policy-pap | security.protocol = PLAINTEXT 18:36:07 policy-pap | 
security.providers = null 18:36:07 policy-pap | send.buffer.bytes = 131072 18:36:07 policy-pap | session.timeout.ms = 45000 18:36:07 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:07 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:07 policy-pap | ssl.cipher.suites = null 18:36:07 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:07 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:07 policy-pap | ssl.engine.factory.class = null 18:36:07 policy-pap | ssl.key.password = null 18:36:07 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:07 policy-pap | ssl.keystore.certificate.chain = null 18:36:07 policy-pap | ssl.keystore.key = null 18:36:07 policy-pap | ssl.keystore.location = null 18:36:07 policy-pap | ssl.keystore.password = null 18:36:07 policy-pap | ssl.keystore.type = JKS 18:36:07 policy-pap | ssl.protocol = TLSv1.3 18:36:07 policy-pap | ssl.provider = null 18:36:07 policy-pap | ssl.secure.random.implementation = null 18:36:07 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:07 policy-pap | ssl.truststore.certificates = null 18:36:07 policy-pap | ssl.truststore.location = null 18:36:07 policy-pap | ssl.truststore.password = null 18:36:07 policy-pap | ssl.truststore.type = JKS 18:36:07 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:07 policy-pap | 18:36:07 policy-pap | [2025-06-15T18:33:14.936+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:07 policy-pap | [2025-06-15T18:33:14.943+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:07 policy-pap | [2025-06-15T18:33:14.943+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:07 policy-pap | [2025-06-15T18:33:14.943+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750012394943 18:36:07 policy-pap | [2025-06-15T18:33:14.943+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Subscribed to topic(s): policy-pdp-pap 18:36:07 policy-pap | [2025-06-15T18:33:14.944+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 18:36:07 policy-pap | [2025-06-15T18:33:14.944+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=74d9bcb2-481b-4bc6-9d3a-3e58f27de5d3, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@15c8bb25 18:36:07 policy-pap | [2025-06-15T18:33:14.944+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=74d9bcb2-481b-4bc6-9d3a-3e58f27de5d3, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:07 policy-pap | [2025-06-15T18:33:14.944+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:07 policy-pap | allow.auto.create.topics = true 18:36:07 policy-pap | auto.commit.interval.ms = 5000 18:36:07 policy-pap | auto.include.jmx.reporter = true 18:36:07 policy-pap | auto.offset.reset = latest 18:36:07 policy-pap | bootstrap.servers = [kafka:9092] 18:36:07 policy-pap | check.crcs = true 18:36:07 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:07 policy-pap | client.id = consumer-policy-pap-4 18:36:07 policy-pap | client.rack = 18:36:07 policy-pap | connections.max.idle.ms = 540000 18:36:07 policy-pap | default.api.timeout.ms = 60000 18:36:07 policy-pap | enable.auto.commit = true 18:36:07 policy-pap | enable.metrics.push = true 18:36:07 policy-pap | exclude.internal.topics = true 18:36:07 policy-pap | fetch.max.bytes = 52428800 18:36:07 policy-pap | fetch.max.wait.ms = 500 18:36:07 policy-pap | fetch.min.bytes = 1 18:36:07 policy-pap | group.id = policy-pap 18:36:07 policy-pap | group.instance.id = null 18:36:07 policy-pap | group.protocol = classic 18:36:07 policy-pap | group.remote.assignor = null 18:36:07 policy-pap | heartbeat.interval.ms = 3000 18:36:07 policy-pap | interceptor.classes = [] 18:36:07 policy-pap | internal.leave.group.on.close = true 18:36:07 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:07 policy-pap | isolation.level = read_uncommitted 18:36:07 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:07 policy-pap | max.partition.fetch.bytes = 1048576 18:36:07 policy-pap | max.poll.interval.ms = 300000 18:36:07 policy-pap | max.poll.records = 500 18:36:07 policy-pap | metadata.max.age.ms = 300000 18:36:07 policy-pap | metadata.recovery.strategy = none 18:36:07 policy-pap | metric.reporters = [] 18:36:07 policy-pap | metrics.num.samples = 2 18:36:07 policy-pap | metrics.recording.level = INFO 18:36:07 policy-pap | metrics.sample.window.ms = 30000 18:36:07 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:07 policy-pap | receive.buffer.bytes = 65536 18:36:07 policy-pap | reconnect.backoff.max.ms = 1000 18:36:07 policy-pap | reconnect.backoff.ms = 50 18:36:07 policy-pap | request.timeout.ms = 30000 18:36:07 policy-pap | retry.backoff.max.ms = 1000 18:36:07 policy-pap | retry.backoff.ms = 100 18:36:07 policy-pap | sasl.client.callback.handler.class = null 18:36:07 policy-pap | sasl.jaas.config = null 18:36:07 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:07 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:07 policy-pap | sasl.kerberos.service.name = null 18:36:07 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:07 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:07 policy-pap | sasl.login.callback.handler.class = null 18:36:07 policy-pap | sasl.login.class = null 18:36:07 policy-pap | sasl.login.connect.timeout.ms = null 18:36:07 policy-pap | sasl.login.read.timeout.ms = null 18:36:07 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:07 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:07 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:07 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:07 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:07 policy-pap | 
sasl.login.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.mechanism = GSSAPI 18:36:07 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:07 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:07 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:07 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:07 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:07 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:07 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:07 policy-pap | security.protocol = PLAINTEXT 18:36:07 policy-pap | security.providers = null 18:36:07 policy-pap | send.buffer.bytes = 131072 18:36:07 policy-pap | session.timeout.ms = 45000 18:36:07 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:07 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:07 policy-pap | ssl.cipher.suites = null 18:36:07 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:07 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:07 policy-pap | ssl.engine.factory.class = null 18:36:07 policy-pap | ssl.key.password = null 18:36:07 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:07 policy-pap | ssl.keystore.certificate.chain = null 18:36:07 policy-pap | ssl.keystore.key = null 18:36:07 policy-pap | ssl.keystore.location = null 18:36:07 policy-pap | ssl.keystore.password = null 18:36:07 policy-pap | ssl.keystore.type = JKS 18:36:07 policy-pap | ssl.protocol = TLSv1.3 18:36:07 policy-pap | ssl.provider = null 18:36:07 policy-pap | ssl.secure.random.implementation = null 18:36:07 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:07 policy-pap | ssl.truststore.certificates = null 18:36:07 policy-pap | ssl.truststore.location = null 18:36:07 policy-pap | ssl.truststore.password = null 18:36:07 policy-pap | ssl.truststore.type = JKS 18:36:07 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:07 policy-pap | 18:36:07 policy-pap | [2025-06-15T18:33:14.944+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:07 policy-pap | [2025-06-15T18:33:14.950+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:07 policy-pap | [2025-06-15T18:33:14.950+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:07 policy-pap | [2025-06-15T18:33:14.950+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750012394950 18:36:07 policy-pap | [2025-06-15T18:33:14.951+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 18:36:07 policy-pap | [2025-06-15T18:33:14.951+00:00|INFO|ServiceManager|main] Policy PAP starting topics 18:36:07 policy-pap | [2025-06-15T18:33:14.951+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=74d9bcb2-481b-4bc6-9d3a-3e58f27de5d3, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:07 policy-pap | [2025-06-15T18:33:14.951+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fa1957b5-0078-4a1f-ae83-bcab973764e3, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:07 policy-pap | [2025-06-15T18:33:14.951+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=1e33dcec-1f39-4991-9d43-9a29df6067da, alive=false, publisher=null]]: starting 18:36:07 policy-pap | [2025-06-15T18:33:14.962+00:00|INFO|ProducerConfig|main] ProducerConfig values: 18:36:07 policy-pap | acks = -1 18:36:07 policy-pap | auto.include.jmx.reporter = true 18:36:07 policy-pap | batch.size = 16384 18:36:07 policy-pap | bootstrap.servers = [kafka:9092] 18:36:07 policy-pap | buffer.memory = 33554432 18:36:07 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:07 policy-pap | client.id = producer-1 18:36:07 policy-pap | compression.gzip.level = -1 18:36:07 policy-pap | compression.lz4.level = 9 18:36:07 policy-pap | compression.type = none 18:36:07 policy-pap | compression.zstd.level = 3 18:36:07 policy-pap | connections.max.idle.ms = 540000 18:36:07 policy-pap | delivery.timeout.ms = 120000 18:36:07 policy-pap | enable.idempotence = true 18:36:07 policy-pap | enable.metrics.push = true 18:36:07 policy-pap | interceptor.classes = [] 18:36:07 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:07 policy-pap | linger.ms = 0 18:36:07 policy-pap | max.block.ms = 60000 18:36:07 policy-pap | max.in.flight.requests.per.connection = 5 18:36:07 policy-pap | max.request.size = 1048576 18:36:07 policy-pap | metadata.max.age.ms = 300000 18:36:07 policy-pap | metadata.max.idle.ms = 300000 18:36:07 policy-pap | metadata.recovery.strategy = none 18:36:07 policy-pap | metric.reporters = [] 18:36:07 policy-pap | metrics.num.samples = 2 18:36:07 policy-pap | metrics.recording.level = INFO 18:36:07 policy-pap | metrics.sample.window.ms = 30000 18:36:07 policy-pap | partitioner.adaptive.partitioning.enable = true 18:36:07 policy-pap | partitioner.availability.timeout.ms = 0 18:36:07 policy-pap | partitioner.class = null 18:36:07 policy-pap | partitioner.ignore.keys = false 18:36:07 policy-pap | receive.buffer.bytes = 32768 18:36:07 policy-pap | reconnect.backoff.max.ms = 1000 18:36:07 policy-pap | reconnect.backoff.ms = 50 18:36:07 policy-pap | request.timeout.ms = 30000 18:36:07 policy-pap | retries = 2147483647 18:36:07 policy-pap | retry.backoff.max.ms = 1000 18:36:07 policy-pap | retry.backoff.ms = 100 18:36:07 policy-pap | sasl.client.callback.handler.class = null 18:36:07 policy-pap | sasl.jaas.config = null 18:36:07 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:07 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:07 policy-pap 
| sasl.kerberos.service.name = null 18:36:07 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:07 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:07 policy-pap | sasl.login.callback.handler.class = null 18:36:07 policy-pap | sasl.login.class = null 18:36:07 policy-pap | sasl.login.connect.timeout.ms = null 18:36:07 policy-pap | sasl.login.read.timeout.ms = null 18:36:07 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:07 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:07 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:07 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:07 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.login.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.mechanism = GSSAPI 18:36:07 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:07 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:07 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:07 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:07 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:07 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:07 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:07 policy-pap | security.protocol = PLAINTEXT 18:36:07 policy-pap | security.providers = null 18:36:07 policy-pap | send.buffer.bytes = 131072 18:36:07 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:07 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:07 policy-pap | ssl.cipher.suites = null 18:36:07 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:07 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:07 policy-pap | ssl.engine.factory.class = null 18:36:07 policy-pap | ssl.key.password = null 18:36:07 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:07 policy-pap | ssl.keystore.certificate.chain = null 18:36:07 policy-pap | ssl.keystore.key = null 18:36:07 policy-pap | ssl.keystore.location = null 18:36:07 policy-pap | ssl.keystore.password = null 18:36:07 policy-pap | ssl.keystore.type = JKS 18:36:07 policy-pap | ssl.protocol = TLSv1.3 18:36:07 policy-pap | ssl.provider = null 18:36:07 policy-pap | ssl.secure.random.implementation = null 18:36:07 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:07 policy-pap | ssl.truststore.certificates = null 18:36:07 policy-pap | ssl.truststore.location = null 18:36:07 policy-pap | ssl.truststore.password = null 18:36:07 policy-pap | ssl.truststore.type = JKS 18:36:07 policy-pap | transaction.timeout.ms = 60000 18:36:07 policy-pap | transactional.id = null 18:36:07 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:07 policy-pap | 18:36:07 policy-pap | [2025-06-15T18:33:14.962+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:07 policy-pap | [2025-06-15T18:33:14.974+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
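In the ProducerConfig dump above, enable.idempotence = true is the one deliberate setting; it is also what pins acks = -1, retries = 2147483647 and max.in.flight.requests.per.connection = 5 shown there. A sketch of an equivalent publisher, reduced to the values that matter (topic and payload are illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public final class PdpPapPublisherSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // Idempotence implies the acks/retries values logged above.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "illustrative payload"));
            }
        }
    }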
18:36:07 policy-pap | [2025-06-15T18:33:14.989+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:07 policy-pap | [2025-06-15T18:33:14.989+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:07 policy-pap | [2025-06-15T18:33:14.989+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750012394989 18:36:07 policy-pap | [2025-06-15T18:33:14.989+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=1e33dcec-1f39-4991-9d43-9a29df6067da, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 18:36:07 policy-pap | [2025-06-15T18:33:14.989+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=64b49b4f-f3d5-436d-9938-df1861ed2587, alive=false, publisher=null]]: starting 18:36:07 policy-pap | [2025-06-15T18:33:14.990+00:00|INFO|ProducerConfig|main] ProducerConfig values: 18:36:07 policy-pap | acks = -1 18:36:07 policy-pap | auto.include.jmx.reporter = true 18:36:07 policy-pap | batch.size = 16384 18:36:07 policy-pap | bootstrap.servers = [kafka:9092] 18:36:07 policy-pap | buffer.memory = 33554432 18:36:07 policy-pap | client.dns.lookup = use_all_dns_ips 18:36:07 policy-pap | client.id = producer-2 18:36:07 policy-pap | compression.gzip.level = -1 18:36:07 policy-pap | compression.lz4.level = 9 18:36:07 policy-pap | compression.type = none 18:36:07 policy-pap | compression.zstd.level = 3 18:36:07 policy-pap | connections.max.idle.ms = 540000 18:36:07 policy-pap | delivery.timeout.ms = 120000 18:36:07 policy-pap | enable.idempotence = true 18:36:07 policy-pap | enable.metrics.push = true 18:36:07 policy-pap | interceptor.classes = [] 18:36:07 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:07 policy-pap | linger.ms = 0 18:36:07 policy-pap | max.block.ms = 60000 18:36:07 policy-pap | max.in.flight.requests.per.connection = 5 18:36:07 policy-pap | max.request.size = 1048576 18:36:07 policy-pap | metadata.max.age.ms = 300000 18:36:07 policy-pap | metadata.max.idle.ms = 300000 18:36:07 policy-pap | metadata.recovery.strategy = none 18:36:07 policy-pap | metric.reporters = [] 18:36:07 policy-pap | metrics.num.samples = 2 18:36:07 policy-pap | metrics.recording.level = INFO 18:36:07 policy-pap | metrics.sample.window.ms = 30000 18:36:07 policy-pap | partitioner.adaptive.partitioning.enable = true 18:36:07 policy-pap | partitioner.availability.timeout.ms = 0 18:36:07 policy-pap | partitioner.class = null 18:36:07 policy-pap | partitioner.ignore.keys = false 18:36:07 policy-pap | receive.buffer.bytes = 32768 18:36:07 policy-pap | reconnect.backoff.max.ms = 1000 18:36:07 policy-pap | reconnect.backoff.ms = 50 18:36:07 policy-pap | request.timeout.ms = 30000 18:36:07 policy-pap | retries = 2147483647 18:36:07 policy-pap | retry.backoff.max.ms = 1000 18:36:07 policy-pap | retry.backoff.ms = 100 18:36:07 policy-pap | sasl.client.callback.handler.class = null 18:36:07 policy-pap | sasl.jaas.config = null 18:36:07 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:07 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 18:36:07 policy-pap | sasl.kerberos.service.name = null 18:36:07 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:07 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:07 policy-pap | sasl.login.callback.handler.class = null 18:36:07 policy-pap | sasl.login.class = null 18:36:07 policy-pap | 
sasl.login.connect.timeout.ms = null 18:36:07 policy-pap | sasl.login.read.timeout.ms = null 18:36:07 policy-pap | sasl.login.refresh.buffer.seconds = 300 18:36:07 policy-pap | sasl.login.refresh.min.period.seconds = 60 18:36:07 policy-pap | sasl.login.refresh.window.factor = 0.8 18:36:07 policy-pap | sasl.login.refresh.window.jitter = 0.05 18:36:07 policy-pap | sasl.login.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.login.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.mechanism = GSSAPI 18:36:07 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 18:36:07 policy-pap | sasl.oauthbearer.expected.audience = null 18:36:07 policy-pap | sasl.oauthbearer.expected.issuer = null 18:36:07 policy-pap | sasl.oauthbearer.header.urlencode = false 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:07 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 18:36:07 policy-pap | sasl.oauthbearer.scope.claim.name = scope 18:36:07 policy-pap | sasl.oauthbearer.sub.claim.name = sub 18:36:07 policy-pap | sasl.oauthbearer.token.endpoint.url = null 18:36:07 policy-pap | security.protocol = PLAINTEXT 18:36:07 policy-pap | security.providers = null 18:36:07 policy-pap | send.buffer.bytes = 131072 18:36:07 policy-pap | socket.connection.setup.timeout.max.ms = 30000 18:36:07 policy-pap | socket.connection.setup.timeout.ms = 10000 18:36:07 policy-pap | ssl.cipher.suites = null 18:36:07 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:07 policy-pap | ssl.endpoint.identification.algorithm = https 18:36:07 policy-pap | ssl.engine.factory.class = null 18:36:07 policy-pap | ssl.key.password = null 18:36:07 policy-pap | ssl.keymanager.algorithm = SunX509 18:36:07 policy-pap | ssl.keystore.certificate.chain = null 18:36:07 policy-pap | ssl.keystore.key = null 18:36:07 policy-pap | ssl.keystore.location = null 18:36:07 policy-pap | ssl.keystore.password = null 18:36:07 policy-pap | ssl.keystore.type = JKS 18:36:07 policy-pap | ssl.protocol = TLSv1.3 18:36:07 policy-pap | ssl.provider = null 18:36:07 policy-pap | ssl.secure.random.implementation = null 18:36:07 policy-pap | ssl.trustmanager.algorithm = PKIX 18:36:07 policy-pap | ssl.truststore.certificates = null 18:36:07 policy-pap | ssl.truststore.location = null 18:36:07 policy-pap | ssl.truststore.password = null 18:36:07 policy-pap | ssl.truststore.type = JKS 18:36:07 policy-pap | transaction.timeout.ms = 60000 18:36:07 policy-pap | transactional.id = null 18:36:07 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:07 policy-pap | 18:36:07 policy-pap | [2025-06-15T18:33:14.990+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:07 policy-pap | [2025-06-15T18:33:14.990+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
18:36:07 policy-pap | [2025-06-15T18:33:14.994+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:07 policy-pap | [2025-06-15T18:33:14.994+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:07 policy-pap | [2025-06-15T18:33:14.994+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750012394994 18:36:07 policy-pap | [2025-06-15T18:33:14.994+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=64b49b4f-f3d5-436d-9938-df1861ed2587, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 18:36:07 policy-pap | [2025-06-15T18:33:14.994+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 18:36:07 policy-pap | [2025-06-15T18:33:14.994+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 18:36:07 policy-pap | [2025-06-15T18:33:14.995+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 18:36:07 policy-pap | [2025-06-15T18:33:14.996+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 18:36:07 policy-pap | [2025-06-15T18:33:14.999+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 18:36:07 policy-pap | [2025-06-15T18:33:15.000+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 18:36:07 policy-pap | [2025-06-15T18:33:15.000+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 18:36:07 policy-pap | [2025-06-15T18:33:15.000+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 18:36:07 policy-pap | [2025-06-15T18:33:15.001+00:00|INFO|TimerManager|Thread-9] timer manager update started 18:36:07 policy-pap | [2025-06-15T18:33:15.002+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 18:36:07 policy-pap | [2025-06-15T18:33:15.002+00:00|INFO|ServiceManager|main] Policy PAP started 18:36:07 policy-pap | [2025-06-15T18:33:15.003+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.85 seconds (process running for 10.484) 18:36:07 policy-pap | [2025-06-15T18:33:15.446+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: y4Rm7-C7SZiMksPpYccgvw 18:36:07 policy-pap | [2025-06-15T18:33:15.447+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: y4Rm7-C7SZiMksPpYccgvw 18:36:07 policy-pap | [2025-06-15T18:33:15.448+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 18:36:07 policy-pap | [2025-06-15T18:33:15.448+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Cluster ID: y4Rm7-C7SZiMksPpYccgvw 18:36:07 policy-pap | [2025-06-15T18:33:15.525+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 18:36:07 policy-pap | [2025-06-15T18:33:15.526+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 18:36:07 policy-pap | [2025-06-15T18:33:15.527+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The 
metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 18:36:07 policy-pap | [2025-06-15T18:33:15.527+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: y4Rm7-C7SZiMksPpYccgvw 18:36:07 policy-pap | [2025-06-15T18:33:15.656+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 18:36:07 policy-pap | [2025-06-15T18:33:15.658+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 18:36:07 policy-pap | [2025-06-15T18:33:17.026+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 18:36:07 policy-pap | [2025-06-15T18:33:17.036+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] (Re-)joining group 18:36:07 policy-pap | [2025-06-15T18:33:17.066+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Request joining group due to: need to re-join with the given member-id: consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3-1ce6a178-25c6-4671-99fc-a5d2d32fa8e9 18:36:07 policy-pap | [2025-06-15T18:33:17.066+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] (Re-)joining group 18:36:07 policy-pap | [2025-06-15T18:33:17.256+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 18:36:07 policy-pap | [2025-06-15T18:33:17.261+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 18:36:07 policy-pap | [2025-06-15T18:33:17.269+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-63f2110d-42d3-428c-997b-d65c58061d50 18:36:07 policy-pap | [2025-06-15T18:33:17.270+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 18:36:07 policy-pap | [2025-06-15T18:33:20.098+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Successfully joined group with generation Generation{generationId=1, memberId='consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3-1ce6a178-25c6-4671-99fc-a5d2d32fa8e9', protocol='range'} 18:36:07 policy-pap | [2025-06-15T18:33:20.108+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] 
[Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Finished assignment for group at generation 1: {consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3-1ce6a178-25c6-4671-99fc-a5d2d32fa8e9=Assignment(partitions=[policy-pdp-pap-0])} 18:36:07 policy-pap | [2025-06-15T18:33:20.130+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Successfully synced group in generation Generation{generationId=1, memberId='consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3-1ce6a178-25c6-4671-99fc-a5d2d32fa8e9', protocol='range'} 18:36:07 policy-pap | [2025-06-15T18:33:20.131+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 18:36:07 policy-pap | [2025-06-15T18:33:20.133+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Adding newly assigned partitions: policy-pdp-pap-0 18:36:07 policy-pap | [2025-06-15T18:33:20.144+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Found no committed offset for partition policy-pdp-pap-0 18:36:07 policy-pap | [2025-06-15T18:33:20.161+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fa1957b5-0078-4a1f-ae83-bcab973764e3-3, groupId=fa1957b5-0078-4a1f-ae83-bcab973764e3] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
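The sequence above (discover coordinator, join, sync, "Adding newly assigned partitions", "Found no committed offset", reset per auto.offset.reset) is the normal first-join handshake for a new consumer group. A sketch that surfaces the same assignment callback; properties are the illustrative ones from the earlier consumer sketch:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public final class AssignmentWatcher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");
            props.put("group.id", "policy-pap");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        parts.forEach(p -> System.out.println("assigned " + p)); // expect policy-pdp-pap-0
                    }
                    @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) { }
                });
                consumer.poll(Duration.ofSeconds(5)); // the join/sync logged above happens inside poll()
            }
        }
    }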
18:36:07 policy-pap | [2025-06-15T18:33:20.274+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-63f2110d-42d3-428c-997b-d65c58061d50', protocol='range'} 18:36:07 policy-pap | [2025-06-15T18:33:20.275+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-63f2110d-42d3-428c-997b-d65c58061d50=Assignment(partitions=[policy-pdp-pap-0])} 18:36:07 policy-pap | [2025-06-15T18:33:20.280+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-63f2110d-42d3-428c-997b-d65c58061d50', protocol='range'} 18:36:07 policy-pap | [2025-06-15T18:33:20.280+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 18:36:07 policy-pap | [2025-06-15T18:33:20.280+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 18:36:07 policy-pap | [2025-06-15T18:33:20.282+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 18:36:07 policy-pap | [2025-06-15T18:33:20.284+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
18:36:07 policy-pap | [2025-06-15T18:33:21.294+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 18:36:07 policy-pap | [] 18:36:07 policy-pap | [2025-06-15T18:33:21.294+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"59cb1196-80dd-40cb-8438-ae0650afa48f","timestampMs":1750012396791,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f"} 18:36:07 policy-pap | [2025-06-15T18:33:21.294+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"59cb1196-80dd-40cb-8438-ae0650afa48f","timestampMs":1750012396791,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f"} 18:36:07 policy-pap | [2025-06-15T18:33:21.297+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK 18:36:07 policy-pap | [2025-06-15T18:33:21.297+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_TOPIC_CHECK 18:36:07 policy-pap | [2025-06-15T18:33:21.351+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"ddacd0a5-e547-476c-a7cb-6000253400d0","timestampMs":1750012401302,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup"} 18:36:07 policy-pap | [2025-06-15T18:33:21.351+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"ddacd0a5-e547-476c-a7cb-6000253400d0","timestampMs":1750012401302,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup"} 18:36:07 policy-pap | [2025-06-15T18:33:21.356+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 18:36:07 policy-pap | [2025-06-15T18:33:22.032+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting 18:36:07 policy-pap | [2025-06-15T18:33:22.032+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting listener 18:36:07 policy-pap | [2025-06-15T18:33:22.032+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting timer 18:36:07 policy-pap | [2025-06-15T18:33:22.033+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=2bcbbf1d-f67f-476c-bca6-85781a33e3c1, expireMs=1750012432033] 18:36:07 policy-pap | [2025-06-15T18:33:22.034+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting enqueue 18:36:07 policy-pap | [2025-06-15T18:33:22.034+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=2bcbbf1d-f67f-476c-bca6-85781a33e3c1, expireMs=1750012432033] 18:36:07 policy-pap | [2025-06-15T18:33:22.035+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate started 18:36:07 policy-pap | [2025-06-15T18:33:22.038+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | 
{"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"2bcbbf1d-f67f-476c-bca6-85781a33e3c1","timestampMs":1750012402007,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.105+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | 
{"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"2bcbbf1d-f67f-476c-bca6-85781a33e3c1","timestampMs":1750012402007,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.106+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 18:36:07 policy-pap | [2025-06-15T18:33:22.125+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | 
{"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"2bcbbf1d-f67f-476c-bca6-85781a33e3c1","timestampMs":1750012402007,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.127+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 18:36:07 policy-pap | [2025-06-15T18:33:22.243+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"2bcbbf1d-f67f-476c-bca6-85781a33e3c1","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"c5434883-83d8-4d0d-85b8-dace443a1c6b","timestampMs":1750012402217,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.243+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"2bcbbf1d-f67f-476c-bca6-85781a33e3c1","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"c5434883-83d8-4d0d-85b8-dace443a1c6b","timestampMs":1750012402217,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.243+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 2bcbbf1d-f67f-476c-bca6-85781a33e3c1 18:36:07 policy-pap | [2025-06-15T18:33:22.244+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping 18:36:07 policy-pap | 
[2025-06-15T18:33:22.244+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping enqueue 18:36:07 policy-pap | [2025-06-15T18:33:22.244+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping timer 18:36:07 policy-pap | [2025-06-15T18:33:22.244+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=2bcbbf1d-f67f-476c-bca6-85781a33e3c1, expireMs=1750012432033] 18:36:07 policy-pap | [2025-06-15T18:33:22.245+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping listener 18:36:07 policy-pap | [2025-06-15T18:33:22.245+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopped 18:36:07 policy-pap | [2025-06-15T18:33:22.249+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"44d556b1-c408-48c6-85fd-a53c71fd7ad9","timestampMs":1750012402227,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.283+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate successful 18:36:07 policy-pap | [2025-06-15T18:33:22.283+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f start publishing next request 18:36:07 policy-pap | [2025-06-15T18:33:22.283+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange starting 18:36:07 policy-pap | [2025-06-15T18:33:22.283+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange starting listener 18:36:07 policy-pap | [2025-06-15T18:33:22.283+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange starting timer 18:36:07 policy-pap | [2025-06-15T18:33:22.283+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=bbc65358-d1d0-4084-b9ab-42951b7c0e24, expireMs=1750012432283] 18:36:07 policy-pap | [2025-06-15T18:33:22.283+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange starting enqueue 18:36:07 policy-pap | [2025-06-15T18:33:22.283+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=bbc65358-d1d0-4084-b9ab-42951b7c0e24, expireMs=1750012432283] 18:36:07 policy-pap | [2025-06-15T18:33:22.284+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"bbc65358-d1d0-4084-b9ab-42951b7c0e24","timestampMs":1750012402008,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.284+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 18:36:07 policy-pap | 
{"deployed-policies":[{"policy-type":"onap.policies.Naming","policy-type-version":"1.0.0","policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 18:36:07 policy-pap | [2025-06-15T18:33:22.284+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange started 18:36:07 policy-pap | [2025-06-15T18:33:22.310+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} 18:36:07 policy-pap | [2025-06-15T18:33:22.630+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"44d556b1-c408-48c6-85fd-a53c71fd7ad9","timestampMs":1750012402227,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.631+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 18:36:07 policy-pap | [2025-06-15T18:33:22.639+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"bbc65358-d1d0-4084-b9ab-42951b7c0e24","timestampMs":1750012402008,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.639+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 18:36:07 policy-pap | [2025-06-15T18:33:22.639+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"bbc65358-d1d0-4084-b9ab-42951b7c0e24","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"4863b4e9-9446-4008-b1a6-205a759de7cc","timestampMs":1750012402303,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.897+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange stopping 18:36:07 policy-pap | [2025-06-15T18:33:22.897+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange stopping enqueue 18:36:07 policy-pap | [2025-06-15T18:33:22.897+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange stopping timer 18:36:07 policy-pap | [2025-06-15T18:33:22.897+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=bbc65358-d1d0-4084-b9ab-42951b7c0e24, expireMs=1750012432283] 18:36:07 policy-pap | [2025-06-15T18:33:22.897+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange stopping listener 18:36:07 policy-pap | [2025-06-15T18:33:22.897+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange stopped 18:36:07 policy-pap | 
[2025-06-15T18:33:22.897+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpStateChange successful 18:36:07 policy-pap | [2025-06-15T18:33:22.897+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f start publishing next request 18:36:07 policy-pap | [2025-06-15T18:33:22.897+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting 18:36:07 policy-pap | [2025-06-15T18:33:22.898+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting listener 18:36:07 policy-pap | [2025-06-15T18:33:22.898+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting timer 18:36:07 policy-pap | [2025-06-15T18:33:22.898+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=7c37b551-e594-4cd3-9436-e93d096d1bb9, expireMs=1750012432898] 18:36:07 policy-pap | [2025-06-15T18:33:22.898+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting enqueue 18:36:07 policy-pap | [2025-06-15T18:33:22.898+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate started 18:36:07 policy-pap | [2025-06-15T18:33:22.898+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c37b551-e594-4cd3-9436-e93d096d1bb9","timestampMs":1750012402616,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.906+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"bbc65358-d1d0-4084-b9ab-42951b7c0e24","timestampMs":1750012402008,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.908+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 18:36:07 policy-pap | [2025-06-15T18:33:22.908+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c37b551-e594-4cd3-9436-e93d096d1bb9","timestampMs":1750012402616,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.908+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 18:36:07 policy-pap | [2025-06-15T18:33:22.911+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"bbc65358-d1d0-4084-b9ab-42951b7c0e24","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"4863b4e9-9446-4008-b1a6-205a759de7cc","timestampMs":1750012402303,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 
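[annotation] The PdpUpdate and PdpStateChange cycles above all follow one correlation pattern: publish a request with a fresh requestId, register an expiry timer (expireMs roughly 30 seconds out), and cancel that timer when a PDP_STATUS arrives whose response.responseTo matches; requests that never get an answer surface later as "update timer discarded (expired)". A compact sketch of that idiom under a single-threaded scheduler follows; the class and method names are made up for illustration and are not ONAP's TimerManager.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of requestId -> expiry-timer correlation, as seen in the
// "update timer registered ..." / "update timer cancelled ..." lines above.
public class RequestTimerSketch {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();

    public void onRequestPublished(String requestId, long timeoutMs) {
        ScheduledFuture<?> timer = scheduler.schedule(() -> {
            pending.remove(requestId);
            System.out.println("update timer expired for " + requestId); // retry/failure handling would go here
        }, timeoutMs, TimeUnit.MILLISECONDS);
        pending.put(requestId, timer);
    }

    public void onResponse(String responseTo) {
        ScheduledFuture<?> timer = pending.remove(responseTo);
        if (timer != null) {
            timer.cancel(false); // matches "update timer cancelled Timer [name=...]"
        }
    }
}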
18:36:07 policy-pap | [2025-06-15T18:33:22.911+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id bbc65358-d1d0-4084-b9ab-42951b7c0e24 18:36:07 policy-pap | [2025-06-15T18:33:22.917+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c37b551-e594-4cd3-9436-e93d096d1bb9","timestampMs":1750012402616,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.917+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 18:36:07 policy-pap | [2025-06-15T18:33:22.920+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"7c37b551-e594-4cd3-9436-e93d096d1bb9","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"5f3766ab-697c-4621-97ba-1f6ca2555905","timestampMs":1750012402911,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.921+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping 18:36:07 policy-pap | [2025-06-15T18:33:22.921+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping enqueue 18:36:07 policy-pap | [2025-06-15T18:33:22.921+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping timer 18:36:07 policy-pap | [2025-06-15T18:33:22.921+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=7c37b551-e594-4cd3-9436-e93d096d1bb9, expireMs=1750012432898] 18:36:07 policy-pap | [2025-06-15T18:33:22.921+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping listener 18:36:07 policy-pap | [2025-06-15T18:33:22.921+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopped 18:36:07 policy-pap | [2025-06-15T18:33:22.922+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"7c37b551-e594-4cd3-9436-e93d096d1bb9","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"5f3766ab-697c-4621-97ba-1f6ca2555905","timestampMs":1750012402911,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:33:22.923+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 7c37b551-e594-4cd3-9436-e93d096d1bb9 18:36:07 policy-pap | [2025-06-15T18:33:22.926+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate successful 18:36:07 policy-pap | [2025-06-15T18:33:22.926+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f has no more requests 18:36:07 policy-pap | 
[2025-06-15T18:33:41.611+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 18:36:07 policy-pap | [2025-06-15T18:33:41.611+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 18:36:07 policy-pap | [2025-06-15T18:33:41.612+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 18:36:07 policy-pap | [2025-06-15T18:33:52.034+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=2bcbbf1d-f67f-476c-bca6-85781a33e3c1, expireMs=1750012432033] 18:36:07 policy-pap | [2025-06-15T18:33:52.284+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=bbc65358-d1d0-4084-b9ab-42951b7c0e24, expireMs=1750012432283] 18:36:07 policy-pap | [2025-06-15T18:34:39.717+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group defaultGroup 18:36:07 policy-pap | [2025-06-15T18:34:39.719+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy onap.restart.tca 1.0.0 to subgroup defaultGroup xacml count=2 18:36:07 policy-pap | [2025-06-15T18:34:39.719+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy onap.restart.tca 1.0.0 18:36:07 policy-pap | [2025-06-15T18:34:39.720+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f defaultGroup xacml policies=1 18:36:07 policy-pap | [2025-06-15T18:34:39.721+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup 18:36:07 policy-pap | [2025-06-15T18:34:39.760+00:00|INFO|SessionData|http-nio-6969-exec-3] use cached group defaultGroup 18:36:07 policy-pap | [2025-06-15T18:34:39.760+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy OSDF_CASABLANCA.Affinity_Default 1.0.0 to subgroup defaultGroup xacml count=3 18:36:07 policy-pap | [2025-06-15T18:34:39.761+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy OSDF_CASABLANCA.Affinity_Default 1.0.0 18:36:07 policy-pap | [2025-06-15T18:34:39.761+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f defaultGroup xacml policies=2 18:36:07 policy-pap | [2025-06-15T18:34:39.761+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup 18:36:07 policy-pap | [2025-06-15T18:34:39.762+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group defaultGroup 18:36:07 policy-pap | [2025-06-15T18:34:39.779+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-15T18:34:39Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=OSDF_CASABLANCA.Affinity_Default 1.0.0, action=DEPLOYMENT, timestamp=2025-06-15T18:34:39Z, user=policyadmin)] 18:36:07 policy-pap | [2025-06-15T18:34:39.835+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting 18:36:07 policy-pap | [2025-06-15T18:34:39.835+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting listener 18:36:07 policy-pap | [2025-06-15T18:34:39.835+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting timer 18:36:07 policy-pap | [2025-06-15T18:34:39.835+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer 
[name=20fcec9c-0e98-48c5-b4b3-b726a44b58b4, expireMs=1750012509835] 18:36:07 policy-pap | [2025-06-15T18:34:39.836+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting enqueue 18:36:07 policy-pap | [2025-06-15T18:34:39.836+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=20fcec9c-0e98-48c5-b4b3-b726a44b58b4, expireMs=1750012509835] 18:36:07 policy-pap | [2025-06-15T18:34:39.836+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate started 18:36:07 policy-pap | [2025-06-15T18:34:39.837+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"20fcec9c-0e98-48c5-b4b3-b726a44b58b4","timestampMs":1750012479761,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:34:39.846+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"20fcec9c-0e98-48c5-b4b3-b726a44b58b4","timestampMs":1750012479761,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:34:39.846+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 18:36:07 policy-pap | [2025-06-15T18:34:39.862+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"20fcec9c-0e98-48c5-b4b3-b726a44b58b4","timestampMs":1750012479761,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:34:39.862+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 18:36:07 policy-pap | [2025-06-15T18:34:40.463+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"20fcec9c-0e98-48c5-b4b3-b726a44b58b4","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e9b3e87d-ebc9-4b81-8fd3-0014200a4b83","timestampMs":1750012480448,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:34:40.463+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 20fcec9c-0e98-48c5-b4b3-b726a44b58b4 18:36:07 policy-pap | [2025-06-15T18:34:40.474+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"20fcec9c-0e98-48c5-b4b3-b726a44b58b4","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e9b3e87d-ebc9-4b81-8fd3-0014200a4b83","timestampMs":1750012480448,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:34:40.474+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate 
stopping 18:36:07 policy-pap | [2025-06-15T18:34:40.474+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping enqueue 18:36:07 policy-pap | [2025-06-15T18:34:40.474+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping timer 18:36:07 policy-pap | [2025-06-15T18:34:40.474+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=20fcec9c-0e98-48c5-b4b3-b726a44b58b4, expireMs=1750012509835] 18:36:07 policy-pap | [2025-06-15T18:34:40.474+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping listener 18:36:07 policy-pap | [2025-06-15T18:34:40.474+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopped 18:36:07 policy-pap | [2025-06-15T18:34:40.490+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate successful 18:36:07 policy-pap | [2025-06-15T18:34:40.490+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f has no more requests 18:36:07 policy-pap | [2025-06-15T18:34:40.491+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 18:36:07 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0},{"policy-type":"onap.policies.optimization.resource.AffinityPolicy","policy-type-version":"1.0.0","policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 18:36:07 policy-pap | [2025-06-15T18:35:04.539+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 18:36:07 policy-pap | [2025-06-15T18:35:04.541+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup defaultGroup xacml count=2 18:36:07 policy-pap | [2025-06-15T18:35:04.541+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 18:36:07 policy-pap | [2025-06-15T18:35:04.541+00:00|INFO|SessionData|http-nio-6969-exec-5] add update xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f defaultGroup xacml policies=0 18:36:07 policy-pap | [2025-06-15T18:35:04.541+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group defaultGroup 18:36:07 policy-pap | [2025-06-15T18:35:04.541+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group defaultGroup 18:36:07 policy-pap | [2025-06-15T18:35:04.556+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-15T18:35:04Z, user=policyadmin)] 18:36:07 policy-pap | [2025-06-15T18:35:04.570+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting 18:36:07 policy-pap | [2025-06-15T18:35:04.570+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting listener 18:36:07 policy-pap | [2025-06-15T18:35:04.570+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting timer 18:36:07 policy-pap | 
[2025-06-15T18:35:04.570+00:00|INFO|TimerManager|http-nio-6969-exec-5] update timer registered Timer [name=f580fdd1-64fd-40bb-a9c6-117e2e9171ce, expireMs=1750012534570] 18:36:07 policy-pap | [2025-06-15T18:35:04.570+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate starting enqueue 18:36:07 policy-pap | [2025-06-15T18:35:04.571+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"f580fdd1-64fd-40bb-a9c6-117e2e9171ce","timestampMs":1750012504541,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:35:04.571+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate started 18:36:07 policy-pap | [2025-06-15T18:35:04.578+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"f580fdd1-64fd-40bb-a9c6-117e2e9171ce","timestampMs":1750012504541,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:35:04.578+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 18:36:07 policy-pap | [2025-06-15T18:35:04.579+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"f580fdd1-64fd-40bb-a9c6-117e2e9171ce","timestampMs":1750012504541,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:35:04.579+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 18:36:07 policy-pap | [2025-06-15T18:35:04.592+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"f580fdd1-64fd-40bb-a9c6-117e2e9171ce","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"8b0c8a16-87af-47f3-8269-e085ca869e1f","timestampMs":1750012504583,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:35:04.592+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"f580fdd1-64fd-40bb-a9c6-117e2e9171ce","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"8b0c8a16-87af-47f3-8269-e085ca869e1f","timestampMs":1750012504583,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:35:04.592+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f580fdd1-64fd-40bb-a9c6-117e2e9171ce 18:36:07 policy-pap | [2025-06-15T18:35:04.593+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping 18:36:07 policy-pap | [2025-06-15T18:35:04.593+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping enqueue 18:36:07 policy-pap | [2025-06-15T18:35:04.593+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping timer 18:36:07 policy-pap | [2025-06-15T18:35:04.593+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f580fdd1-64fd-40bb-a9c6-117e2e9171ce, expireMs=1750012534570] 18:36:07 policy-pap | [2025-06-15T18:35:04.593+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopping listener 18:36:07 policy-pap | [2025-06-15T18:35:04.593+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate stopped 18:36:07 policy-pap | [2025-06-15T18:35:04.612+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f PdpUpdate successful 18:36:07 policy-pap | [2025-06-15T18:35:04.612+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 18:36:07 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}]} 18:36:07 policy-pap | [2025-06-15T18:35:04.612+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f has no more requests 18:36:07 policy-pap | [2025-06-15T18:35:09.835+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=20fcec9c-0e98-48c5-b4b3-b726a44b58b4, expireMs=1750012509835] 18:36:07 policy-pap | [2025-06-15T18:35:15.001+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 18:36:07 policy-pap | [2025-06-15T18:35:22.248+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:07 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"f1a83c11-d55e-46b8-8140-e2781af980b9","timestampMs":1750012522238,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:35:22.249+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 18:36:07 policy-pap | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"f1a83c11-d55e-46b8-8140-e2781af980b9","timestampMs":1750012522238,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:07 policy-pap | [2025-06-15T18:35:22.249+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 18:36:08 policy-xacml-pdp | Waiting for pap port 6969... 18:36:08 policy-xacml-pdp | pap (172.17.0.9:6969) open 18:36:08 policy-xacml-pdp | Waiting for kafka port 9092... 18:36:08 policy-xacml-pdp | kafka (172.17.0.6:9092) open 18:36:08 policy-xacml-pdp | + KEYSTORE=/opt/app/policy/pdpx/etc/ssl/policy-keystore 18:36:08 policy-xacml-pdp | + TRUSTSTORE=/opt/app/policy/pdpx/etc/ssl/policy-truststore 18:36:08 policy-xacml-pdp | + KEYSTORE_PASSWD=Pol1cy_0nap 18:36:08 policy-xacml-pdp | + TRUSTSTORE_PASSWD=Pol1cy_0nap 18:36:08 policy-xacml-pdp | + '[' 0 -ge 1 ] 18:36:08 policy-xacml-pdp | + CONFIG_FILE= 18:36:08 policy-xacml-pdp | + '[' -z ] 18:36:08 policy-xacml-pdp | + CONFIG_FILE=/opt/app/policy/pdpx/etc/defaultConfig.json 18:36:08 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-truststore ] 18:36:08 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-keystore ] 18:36:08 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/xacml.properties ] 18:36:08 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/logback.xml ] 18:36:08 policy-xacml-pdp | + echo 'Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json' 18:36:08 policy-xacml-pdp | Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json 18:36:08 policy-xacml-pdp | + /usr/lib/jvm/default-jvm/bin/java -cp '/opt/app/policy/pdpx/etc:/opt/app/policy/pdpx/lib/*' '-Dlogback.configurationFile=/opt/app/policy/pdpx/etc/logback.xml' '-Djavax.net.ssl.keyStore=/opt/app/policy/pdpx/etc/ssl/policy-keystore' '-Djavax.net.ssl.keyStorePassword=Pol1cy_0nap' '-Djavax.net.ssl.trustStore=/opt/app/policy/pdpx/etc/ssl/policy-truststore' '-Djavax.net.ssl.trustStorePassword=Pol1cy_0nap' org.onap.policy.pdpx.main.startstop.Main -c /opt/app/policy/pdpx/etc/defaultConfig.json 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.019+00:00|INFO|Main|main] Starting policy xacml pdp service with arguments - [-c, /opt/app/policy/pdpx/etc/defaultConfig.json] 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.110+00:00|INFO|XacmlPdpActivator|main] Activator initializing using org.onap.policy.pdpx.main.parameters.XacmlPdpParameterGroup@37858383 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.159+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:08 policy-xacml-pdp | allow.auto.create.topics = true 18:36:08 policy-xacml-pdp | auto.commit.interval.ms = 5000 18:36:08 policy-xacml-pdp | auto.include.jmx.reporter = true 18:36:08 policy-xacml-pdp | auto.offset.reset = latest 18:36:08 policy-xacml-pdp | bootstrap.servers = [kafka:9092] 18:36:08 policy-xacml-pdp | check.crcs = true 18:36:08 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips 18:36:08 policy-xacml-pdp | client.id = consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-1 18:36:08 policy-xacml-pdp | client.rack = 18:36:08 policy-xacml-pdp | connections.max.idle.ms = 540000 18:36:08 policy-xacml-pdp | default.api.timeout.ms = 60000 18:36:08 policy-xacml-pdp | enable.auto.commit = 
true 18:36:08 policy-xacml-pdp | enable.metrics.push = true 18:36:08 policy-xacml-pdp | exclude.internal.topics = true 18:36:08 policy-xacml-pdp | fetch.max.bytes = 52428800 18:36:08 policy-xacml-pdp | fetch.max.wait.ms = 500 18:36:08 policy-xacml-pdp | fetch.min.bytes = 1 18:36:08 policy-xacml-pdp | group.id = 38fd38ff-592e-4c56-927f-cdd1f27311ce 18:36:08 policy-xacml-pdp | group.instance.id = null 18:36:08 policy-xacml-pdp | group.protocol = classic 18:36:08 policy-xacml-pdp | group.remote.assignor = null 18:36:08 policy-xacml-pdp | heartbeat.interval.ms = 3000 18:36:08 policy-xacml-pdp | interceptor.classes = [] 18:36:08 policy-xacml-pdp | internal.leave.group.on.close = true 18:36:08 policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:08 policy-xacml-pdp | isolation.level = read_uncommitted 18:36:08 policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:08 policy-xacml-pdp | max.partition.fetch.bytes = 1048576 18:36:08 policy-xacml-pdp | max.poll.interval.ms = 300000 18:36:08 policy-xacml-pdp | max.poll.records = 500 18:36:08 policy-xacml-pdp | metadata.max.age.ms = 300000 18:36:08 policy-xacml-pdp | metadata.recovery.strategy = none 18:36:08 policy-xacml-pdp | metric.reporters = [] 18:36:08 policy-xacml-pdp | metrics.num.samples = 2 18:36:08 policy-xacml-pdp | metrics.recording.level = INFO 18:36:08 policy-xacml-pdp | metrics.sample.window.ms = 30000 18:36:08 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:08 policy-xacml-pdp | receive.buffer.bytes = 65536 18:36:08 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 18:36:08 policy-xacml-pdp | reconnect.backoff.ms = 50 18:36:08 policy-xacml-pdp | request.timeout.ms = 30000 18:36:08 policy-xacml-pdp | retry.backoff.max.ms = 1000 18:36:08 policy-xacml-pdp | retry.backoff.ms = 100 18:36:08 policy-xacml-pdp | sasl.client.callback.handler.class = null 18:36:08 policy-xacml-pdp | sasl.jaas.config = null 18:36:08 policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:08 policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 18:36:08 policy-xacml-pdp | sasl.kerberos.service.name = null 18:36:08 policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:08 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:08 policy-xacml-pdp | sasl.login.callback.handler.class = null 18:36:08 policy-xacml-pdp | sasl.login.class = null 18:36:08 policy-xacml-pdp | sasl.login.connect.timeout.ms = null 18:36:08 policy-xacml-pdp | sasl.login.read.timeout.ms = null 18:36:08 policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 18:36:08 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 18:36:08 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 18:36:08 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 18:36:08 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 18:36:08 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 18:36:08 policy-xacml-pdp | sasl.mechanism = GSSAPI 18:36:08 policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 18:36:08 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null 18:36:08 policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null 18:36:08 policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:08 policy-xacml-pdp | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null 18:36:08 policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope 18:36:08 policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub 18:36:08 policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null 18:36:08 policy-xacml-pdp | security.protocol = PLAINTEXT 18:36:08 policy-xacml-pdp | security.providers = null 18:36:08 policy-xacml-pdp | send.buffer.bytes = 131072 18:36:08 policy-xacml-pdp | session.timeout.ms = 45000 18:36:08 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 18:36:08 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 18:36:08 policy-xacml-pdp | ssl.cipher.suites = null 18:36:08 policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:08 policy-xacml-pdp | ssl.endpoint.identification.algorithm = https 18:36:08 policy-xacml-pdp | ssl.engine.factory.class = null 18:36:08 policy-xacml-pdp | ssl.key.password = null 18:36:08 policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 18:36:08 policy-xacml-pdp | ssl.keystore.certificate.chain = null 18:36:08 policy-xacml-pdp | ssl.keystore.key = null 18:36:08 policy-xacml-pdp | ssl.keystore.location = null 18:36:08 policy-xacml-pdp | ssl.keystore.password = null 18:36:08 policy-xacml-pdp | ssl.keystore.type = JKS 18:36:08 policy-xacml-pdp | ssl.protocol = TLSv1.3 18:36:08 policy-xacml-pdp | ssl.provider = null 18:36:08 policy-xacml-pdp | ssl.secure.random.implementation = null 18:36:08 policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX 18:36:08 policy-xacml-pdp | ssl.truststore.certificates = null 18:36:08 policy-xacml-pdp | ssl.truststore.location = null 18:36:08 policy-xacml-pdp | ssl.truststore.password = null 18:36:08 policy-xacml-pdp | ssl.truststore.type = JKS 18:36:08 policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:08 policy-xacml-pdp | 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.207+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.349+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.349+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.349+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750012396347 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.351+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-1, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Subscribed to topic(s): policy-pdp-pap 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.410+00:00|INFO|XacmlPdpApplicationManager|main] Initialization applications org.onap.policy.pdpx.main.parameters.XacmlApplicationParameters@7ec3394b JerseyClient(name=policyApiParameters, https=false, selfSignedCerts=false, hostname=policy-api, port=6969, basePath=null, userName=policyadmin, password=zb!XztG34, client=org.glassfish.jersey.client.JerseyClient@698122b2, baseUrl=http://policy-api:6969/, alive=true) 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.424+00:00|INFO|XacmlPdpApplicationManager|main] Application guard supports [onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, 
onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0] 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.425+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath guard at this path /opt/app/policy/pdpx/apps/guard 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.426+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/guard 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.428+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/guard/xacml.properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.430+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:08 policy-xacml-pdp | {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.430+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.persistenceunit -> OperationsHistoryPU 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.430+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.name -> GetOperationOutcome 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.431+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.431+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.431+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:08 policy-xacml-pdp | 
[2025-06-15T18:33:16.431+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.431+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.431+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.432+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.description -> Returns operation outcome 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.432+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.description -> Returns operation counts based on time window 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.432+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.password -> policy_user 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.432+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.432+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.issuer -> urn:org:onap:xacml:guard:get-operation-outcome 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.432+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.persistenceunit -> OperationsHistoryPU 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.432+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.driver -> org.postgresql.Driver 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.433+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.name -> CountRecentOperations 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.433+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.433+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.433+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.url -> jdbc:postgresql://postgres:5432/operationshistory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.433+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.user -> policy_user 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.433+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.433+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.issuer -> urn:org:onap:xacml:guard:count-recent-operations 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.434+00:00|INFO|XacmlPolicyUtils|main] xacml.pip.engines -> count-recent-operations,get-operation-outcome 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.434+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.434+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.434+00:00|INFO|StdXacmlApplicationServiceProvider|main] {count-recent-operations.persistenceunit=OperationsHistoryPU, 
get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.437+00:00|WARN|XACMLProperties|main] Properties file /usr/lib/jvm/java-17-openjdk/lib/xacml.properties cannot be read. 
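The guard application's xacml.properties above pairs each PIP engine (count-recent-operations, get-operation-outcome) with JDBC settings for the operations-history Postgres database, and it is the one application using deny-overrides rather than combined-permit-overrides as its root combining algorithm. As a rough illustration only (not the ONAP implementation), a plain java.util.Properties loader that echoes keys the way the XacmlPolicyUtils INFO lines do might look like this; the file path is taken from the log, everything else is a generic sketch:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class XacmlPropertiesLoader {
        public static void main(String[] args) throws IOException {
            // Path taken from the log above; the other application directories
            // (optimization, naming, native, match, monitoring) follow the same layout.
            String path = "/opt/app/policy/pdpx/apps/guard/xacml.properties";
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            }
            // Echo each key -> value pair, mirroring the "key -> value" INFO lines.
            props.stringPropertyNames().stream().sorted()
                 .forEach(k -> System.out.println(k + " -> " + props.getProperty(k)));
        }
    }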
18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.484+00:00|INFO|XacmlPdpApplicationManager|main] Application optimization supports [onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0] 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.484+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath optimization at this path /opt/app/policy/pdpx/apps/optimization 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.485+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/optimization 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.485+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/optimization/xacml.properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.485+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:08 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.486+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.486+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.486+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.486+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.486+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.487+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.487+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.487+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 
18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.487+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.488+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.488+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.488+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.488+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.490+00:00|INFO|XacmlPdpApplicationManager|main] Application naming supports [onap.policies.Naming 1.0.0] 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.491+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath naming at this path /opt/app/policy/pdpx/apps/naming 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.491+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/naming 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.491+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/naming/xacml.properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.491+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:08 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:08 policy-xacml-pdp | 
[2025-06-15T18:33:16.492+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.492+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.492+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.492+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.492+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.493+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.493+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.493+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.493+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.493+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.493+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.493+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.494+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.496+00:00|INFO|XacmlPdpApplicationManager|main] Application native supports [onap.policies.native.Xacml 1.0.0, onap.policies.native.ToscaXacml 1.0.0] 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.496+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath native at this path /opt/app/policy/pdpx/apps/native 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.496+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is 
/opt/app/policy/pdpx/apps/native 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.497+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/native/xacml.properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.497+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:08 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.497+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.497+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.497+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.497+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.498+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.498+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.498+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.498+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.498+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.498+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.499+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.499+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.499+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, 
xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.500+00:00|INFO|XacmlPdpApplicationManager|main] Application match supports [onap.policies.Match 1.0.0] 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.500+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath match at this path /opt/app/policy/pdpx/apps/match 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.500+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/match 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.500+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/match/xacml.properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.500+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:08 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.501+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.501+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.501+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.501+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.501+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.503+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> 
urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.503+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.503+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.503+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.503+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.503+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.503+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.503+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.506+00:00|INFO|XacmlPdpApplicationManager|main] Application monitoring supports [onap.Monitoring 1.0.0] 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.506+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath monitoring at this path /opt/app/policy/pdpx/apps/monitoring 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.506+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/monitoring 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.506+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/monitoring/xacml.properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.506+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 18:36:08 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, 
xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.506+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.506+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.506+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.506+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.506+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.507+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.507+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.507+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.507+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.507+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.507+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.507+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.507+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.507+00:00|INFO|XacmlPdpApplicationManager|main] Finished applications initialization 
{optimize=org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplication@2b95e48b, native=org.onap.policy.xacml.pdp.application.nativ.NativePdpApplication@4a3329b9, guard=org.onap.policy.xacml.pdp.application.guard.GuardPdpApplication@3dddefd8, naming=org.onap.policy.xacml.pdp.application.naming.NamingPdpApplication@160ac7fb, match=org.onap.policy.xacml.pdp.application.match.MatchPdpApplication@12bfd80d, configure=org.onap.policy.xacml.pdp.application.monitoring.MonitoringPdpApplication@41925502} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.531+00:00|INFO|XacmlPdpHearbeatPublisher|main] heartbeat topic probe 4000ms 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.720+00:00|INFO|ServiceManager|main] service manager starting 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.720+00:00|INFO|ServiceManager|main] service manager starting XACML PDP parameters 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.720+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.720+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=38fd38ff-592e-4c56-927f-cdd1f27311ce, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f574cc2 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.733+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=38fd38ff-592e-4c56-927f-cdd1f27311ce, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.734+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 18:36:08 policy-xacml-pdp | allow.auto.create.topics = true 18:36:08 policy-xacml-pdp | auto.commit.interval.ms = 5000 18:36:08 policy-xacml-pdp | auto.include.jmx.reporter = true 18:36:08 policy-xacml-pdp | auto.offset.reset = latest 18:36:08 policy-xacml-pdp | bootstrap.servers = [kafka:9092] 18:36:08 policy-xacml-pdp | check.crcs = true 18:36:08 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips 18:36:08 policy-xacml-pdp | client.id = consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2 18:36:08 policy-xacml-pdp | client.rack = 18:36:08 policy-xacml-pdp | connections.max.idle.ms = 540000 18:36:08 policy-xacml-pdp | default.api.timeout.ms = 60000 18:36:08 policy-xacml-pdp | enable.auto.commit = true 18:36:08 policy-xacml-pdp | enable.metrics.push = true 18:36:08 policy-xacml-pdp | exclude.internal.topics = true 18:36:08 policy-xacml-pdp | fetch.max.bytes = 52428800 18:36:08 policy-xacml-pdp | fetch.max.wait.ms = 500 18:36:08 policy-xacml-pdp | fetch.min.bytes 
= 1 18:36:08 policy-xacml-pdp | group.id = 38fd38ff-592e-4c56-927f-cdd1f27311ce 18:36:08 policy-xacml-pdp | group.instance.id = null 18:36:08 policy-xacml-pdp | group.protocol = classic 18:36:08 policy-xacml-pdp | group.remote.assignor = null 18:36:08 policy-xacml-pdp | heartbeat.interval.ms = 3000 18:36:08 policy-xacml-pdp | interceptor.classes = [] 18:36:08 policy-xacml-pdp | internal.leave.group.on.close = true 18:36:08 policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 18:36:08 policy-xacml-pdp | isolation.level = read_uncommitted 18:36:08 policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:08 policy-xacml-pdp | max.partition.fetch.bytes = 1048576 18:36:08 policy-xacml-pdp | max.poll.interval.ms = 300000 18:36:08 policy-xacml-pdp | max.poll.records = 500 18:36:08 policy-xacml-pdp | metadata.max.age.ms = 300000 18:36:08 policy-xacml-pdp | metadata.recovery.strategy = none 18:36:08 policy-xacml-pdp | metric.reporters = [] 18:36:08 policy-xacml-pdp | metrics.num.samples = 2 18:36:08 policy-xacml-pdp | metrics.recording.level = INFO 18:36:08 policy-xacml-pdp | metrics.sample.window.ms = 30000 18:36:08 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 18:36:08 policy-xacml-pdp | receive.buffer.bytes = 65536 18:36:08 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 18:36:08 policy-xacml-pdp | reconnect.backoff.ms = 50 18:36:08 policy-xacml-pdp | request.timeout.ms = 30000 18:36:08 policy-xacml-pdp | retry.backoff.max.ms = 1000 18:36:08 policy-xacml-pdp | retry.backoff.ms = 100 18:36:08 policy-xacml-pdp | sasl.client.callback.handler.class = null 18:36:08 policy-xacml-pdp | sasl.jaas.config = null 18:36:08 policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:08 policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 18:36:08 policy-xacml-pdp | sasl.kerberos.service.name = null 18:36:08 policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:08 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:08 policy-xacml-pdp | sasl.login.callback.handler.class = null 18:36:08 policy-xacml-pdp | sasl.login.class = null 18:36:08 policy-xacml-pdp | sasl.login.connect.timeout.ms = null 18:36:08 policy-xacml-pdp | sasl.login.read.timeout.ms = null 18:36:08 policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 18:36:08 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 18:36:08 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 18:36:08 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 18:36:08 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 18:36:08 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 18:36:08 policy-xacml-pdp | sasl.mechanism = GSSAPI 18:36:08 policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 18:36:08 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null 18:36:08 policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null 18:36:08 policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null 18:36:08 policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope 
18:36:08 policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub 18:36:08 policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null 18:36:08 policy-xacml-pdp | security.protocol = PLAINTEXT 18:36:08 policy-xacml-pdp | security.providers = null 18:36:08 policy-xacml-pdp | send.buffer.bytes = 131072 18:36:08 policy-xacml-pdp | session.timeout.ms = 45000 18:36:08 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 18:36:08 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 18:36:08 policy-xacml-pdp | ssl.cipher.suites = null 18:36:08 policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:08 policy-xacml-pdp | ssl.endpoint.identification.algorithm = https 18:36:08 policy-xacml-pdp | ssl.engine.factory.class = null 18:36:08 policy-xacml-pdp | ssl.key.password = null 18:36:08 policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 18:36:08 policy-xacml-pdp | ssl.keystore.certificate.chain = null 18:36:08 policy-xacml-pdp | ssl.keystore.key = null 18:36:08 policy-xacml-pdp | ssl.keystore.location = null 18:36:08 policy-xacml-pdp | ssl.keystore.password = null 18:36:08 policy-xacml-pdp | ssl.keystore.type = JKS 18:36:08 policy-xacml-pdp | ssl.protocol = TLSv1.3 18:36:08 policy-xacml-pdp | ssl.provider = null 18:36:08 policy-xacml-pdp | ssl.secure.random.implementation = null 18:36:08 policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX 18:36:08 policy-xacml-pdp | ssl.truststore.certificates = null 18:36:08 policy-xacml-pdp | ssl.truststore.location = null 18:36:08 policy-xacml-pdp | ssl.truststore.password = null 18:36:08 policy-xacml-pdp | ssl.truststore.type = JKS 18:36:08 policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 18:36:08 policy-xacml-pdp | 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.734+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.748+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.748+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.748+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750012396748 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.749+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Subscribed to topic(s): policy-pdp-pap 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.749+00:00|INFO|ServiceManager|main] service manager starting topics 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.749+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=38fd38ff-592e-4c56-927f-cdd1f27311ce, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.749+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink 
[partitionId=dbc9da99-ada3-4c6c-a1d0-30c66d398562, alive=false, publisher=null]]: starting 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.759+00:00|INFO|ProducerConfig|main] ProducerConfig values: 18:36:08 policy-xacml-pdp | acks = -1 18:36:08 policy-xacml-pdp | auto.include.jmx.reporter = true 18:36:08 policy-xacml-pdp | batch.size = 16384 18:36:08 policy-xacml-pdp | bootstrap.servers = [kafka:9092] 18:36:08 policy-xacml-pdp | buffer.memory = 33554432 18:36:08 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips 18:36:08 policy-xacml-pdp | client.id = producer-1 18:36:08 policy-xacml-pdp | compression.gzip.level = -1 18:36:08 policy-xacml-pdp | compression.lz4.level = 9 18:36:08 policy-xacml-pdp | compression.type = none 18:36:08 policy-xacml-pdp | compression.zstd.level = 3 18:36:08 policy-xacml-pdp | connections.max.idle.ms = 540000 18:36:08 policy-xacml-pdp | delivery.timeout.ms = 120000 18:36:08 policy-xacml-pdp | enable.idempotence = true 18:36:08 policy-xacml-pdp | enable.metrics.push = true 18:36:08 policy-xacml-pdp | interceptor.classes = [] 18:36:08 policy-xacml-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:08 policy-xacml-pdp | linger.ms = 0 18:36:08 policy-xacml-pdp | max.block.ms = 60000 18:36:08 policy-xacml-pdp | max.in.flight.requests.per.connection = 5 18:36:08 policy-xacml-pdp | max.request.size = 1048576 18:36:08 policy-xacml-pdp | metadata.max.age.ms = 300000 18:36:08 policy-xacml-pdp | metadata.max.idle.ms = 300000 18:36:08 policy-xacml-pdp | metadata.recovery.strategy = none 18:36:08 policy-xacml-pdp | metric.reporters = [] 18:36:08 policy-xacml-pdp | metrics.num.samples = 2 18:36:08 policy-xacml-pdp | metrics.recording.level = INFO 18:36:08 policy-xacml-pdp | metrics.sample.window.ms = 30000 18:36:08 policy-xacml-pdp | partitioner.adaptive.partitioning.enable = true 18:36:08 policy-xacml-pdp | partitioner.availability.timeout.ms = 0 18:36:08 policy-xacml-pdp | partitioner.class = null 18:36:08 policy-xacml-pdp | partitioner.ignore.keys = false 18:36:08 policy-xacml-pdp | receive.buffer.bytes = 32768 18:36:08 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 18:36:08 policy-xacml-pdp | reconnect.backoff.ms = 50 18:36:08 policy-xacml-pdp | request.timeout.ms = 30000 18:36:08 policy-xacml-pdp | retries = 2147483647 18:36:08 policy-xacml-pdp | retry.backoff.max.ms = 1000 18:36:08 policy-xacml-pdp | retry.backoff.ms = 100 18:36:08 policy-xacml-pdp | sasl.client.callback.handler.class = null 18:36:08 policy-xacml-pdp | sasl.jaas.config = null 18:36:08 policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 18:36:08 policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 18:36:08 policy-xacml-pdp | sasl.kerberos.service.name = null 18:36:08 policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 18:36:08 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 18:36:08 policy-xacml-pdp | sasl.login.callback.handler.class = null 18:36:08 policy-xacml-pdp | sasl.login.class = null 18:36:08 policy-xacml-pdp | sasl.login.connect.timeout.ms = null 18:36:08 policy-xacml-pdp | sasl.login.read.timeout.ms = null 18:36:08 policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 18:36:08 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 18:36:08 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 18:36:08 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 18:36:08 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 18:36:08 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 
18:36:08 policy-xacml-pdp | sasl.mechanism = GSSAPI 18:36:08 policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 18:36:08 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null 18:36:08 policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null 18:36:08 policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 18:36:08 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null 18:36:08 policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope 18:36:08 policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub 18:36:08 policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null 18:36:08 policy-xacml-pdp | security.protocol = PLAINTEXT 18:36:08 policy-xacml-pdp | security.providers = null 18:36:08 policy-xacml-pdp | send.buffer.bytes = 131072 18:36:08 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 18:36:08 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 18:36:08 policy-xacml-pdp | ssl.cipher.suites = null 18:36:08 policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 18:36:08 policy-xacml-pdp | ssl.endpoint.identification.algorithm = https 18:36:08 policy-xacml-pdp | ssl.engine.factory.class = null 18:36:08 policy-xacml-pdp | ssl.key.password = null 18:36:08 policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 18:36:08 policy-xacml-pdp | ssl.keystore.certificate.chain = null 18:36:08 policy-xacml-pdp | ssl.keystore.key = null 18:36:08 policy-xacml-pdp | ssl.keystore.location = null 18:36:08 policy-xacml-pdp | ssl.keystore.password = null 18:36:08 policy-xacml-pdp | ssl.keystore.type = JKS 18:36:08 policy-xacml-pdp | ssl.protocol = TLSv1.3 18:36:08 policy-xacml-pdp | ssl.provider = null 18:36:08 policy-xacml-pdp | ssl.secure.random.implementation = null 18:36:08 policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX 18:36:08 policy-xacml-pdp | ssl.truststore.certificates = null 18:36:08 policy-xacml-pdp | ssl.truststore.location = null 18:36:08 policy-xacml-pdp | ssl.truststore.password = null 18:36:08 policy-xacml-pdp | ssl.truststore.type = JKS 18:36:08 policy-xacml-pdp | transaction.timeout.ms = 60000 18:36:08 policy-xacml-pdp | transactional.id = null 18:36:08 policy-xacml-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 18:36:08 policy-xacml-pdp | 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.760+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.769+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
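The ProducerConfig dump above describes an idempotent String/String producer pointed at kafka:9092 (acks = -1, enable.idempotence = true, StringSerializer for key and value), which the PDP uses to publish JSON messages to policy-pdp-pap. A minimal sketch using only the stock Apache Kafka client API, not the ONAP InlineKafkaTopicSink wrapper, that reproduces those key settings:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the ProducerConfig dump above.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // implies acks=all (-1)
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The PDP publishes JSON payloads such as PDP_TOPIC_CHECK to this topic.
                producer.send(new ProducerRecord<>("policy-pdp-pap",
                        "{\"messageName\":\"PDP_TOPIC_CHECK\"}"));
            }
        }
    }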
18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.789+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.789+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.789+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750012396789 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.789+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=dbc9da99-ada3-4c6c-a1d0-30c66d398562, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.789+00:00|INFO|ServiceManager|main] service manager starting Terminate PDP 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.789+00:00|INFO|ServiceManager|main] service manager starting Heartbeat Publisher 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.790+00:00|INFO|ServiceManager|main] service manager starting REST Server 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.790+00:00|INFO|ServiceManager|main] service manager starting 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.790+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.799+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=38fd38ff-592e-4c56-927f-cdd1f27311ce, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: registering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007fc70b2ad878@152b0d93 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.799+00:00|INFO|SingleThreadedBusTopicSource|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=38fd38ff-592e-4c56-927f-cdd1f27311ce, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=2]]]]: register: start not attempted 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.790+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, 
user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.802+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers: 18:36:08 policy-xacml-pdp | [] 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.802+00:00|INFO|ServiceManager|main] service manager started 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.802+00:00|INFO|ServiceManager|main] service manager started 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.803+00:00|INFO|Main|main] Started policy-xacml-pdp service successfully. 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.804+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"59cb1196-80dd-40cb-8438-ae0650afa48f","timestampMs":1750012396791,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:16.803+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:17.166+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: y4Rm7-C7SZiMksPpYccgvw 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:17.166+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Cluster ID: y4Rm7-C7SZiMksPpYccgvw 18:36:08 policy-xacml-pdp | 
[2025-06-15T18:33:17.167+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:17.167+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:17.176+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] (Re-)joining group 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:17.192+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Request joining group due to: need to re-join with the given member-id: consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2-4dda311c-3071-4b89-8df7-26c04c67b5ce 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:17.192+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] (Re-)joining group 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:17.393+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:17.394+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:20.197+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Successfully joined group with generation Generation{generationId=1, memberId='consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2-4dda311c-3071-4b89-8df7-26c04c67b5ce', protocol='range'} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:20.208+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Finished assignment for group at generation 1: {consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2-4dda311c-3071-4b89-8df7-26c04c67b5ce=Assignment(partitions=[policy-pdp-pap-0])} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:20.216+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Successfully synced group in generation Generation{generationId=1, memberId='consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2-4dda311c-3071-4b89-8df7-26c04c67b5ce', protocol='range'} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:20.217+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:20.219+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Adding newly assigned partitions: policy-pdp-pap-0 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:20.225+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] 
[Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Found no committed offset for partition policy-pdp-pap-0 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:20.235+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-38fd38ff-592e-4c56-927f-cdd1f27311ce-2, groupId=38fd38ff-592e-4c56-927f-cdd1f27311ce] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:21.253+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"59cb1196-80dd-40cb-8438-ae0650afa48f","timestampMs":1750012396791,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:21.292+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"59cb1196-80dd-40cb-8438-ae0650afa48f","timestampMs":1750012396791,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:21.295+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:21.296+00:00|INFO|BidirectionalTopicClient|KAFKA-source-policy-pdp-pap] topic policy-pdp-pap is ready; found matching message PdpTopicCheck(super=PdpMessage(messageName=PDP_TOPIC_CHECK, requestId=59cb1196-80dd-40cb-8438-ae0650afa48f, timestampMs=1750012396791, name=xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f, pdpGroup=null, pdpSubgroup=null)) 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:21.302+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=38fd38ff-592e-4c56-927f-cdd1f27311ce, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=1, locked=false, #topicListeners=2]]]]: unregistering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007fc70b2ad878@152b0d93 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:21.304+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=ddacd0a5-e547-476c-a7cb-6000253400d0, timestampMs=1750012401302, name=xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f, pdpGroup=defaultGroup, pdpSubgroup=null), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[], deploymentInstanceInfo=null, properties=null, response=null) 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:21.312+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"ddacd0a5-e547-476c-a7cb-6000253400d0","timestampMs":1750012401302,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup"} 18:36:08 policy-xacml-pdp | 
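The PDP_TOPIC_CHECK traffic above is a readiness loopback: the PDP publishes the check on policy-pdp-pap and waits until it consumes its own message back (note the identical requestId on the OUT and IN entries) before it unregisters the probe and starts heartbeating. A minimal sketch of the same probe, assuming the kafka-python client and the kafka:9092 bootstrap address that appears in the log:

    import json
    import time
    import uuid

    from kafka import KafkaConsumer, KafkaProducer  # assumption: kafka-python is installed

    BOOTSTRAP = "kafka:9092"   # broker address as it appears in the log
    TOPIC = "policy-pdp-pap"
    PDP_NAME = "xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f"

    def topic_is_ready(timeout_s: int = 15) -> bool:
        """Publish a PDP_TOPIC_CHECK and wait to read it back on the same topic."""
        check = {
            "messageName": "PDP_TOPIC_CHECK",
            "requestId": str(uuid.uuid4()),
            "timestampMs": int(time.time() * 1000),
            "name": PDP_NAME,
        }
        consumer = KafkaConsumer(
            TOPIC,
            bootstrap_servers=BOOTSTRAP,
            auto_offset_reset="earliest",
            consumer_timeout_ms=timeout_s * 1000,
            value_deserializer=lambda raw: json.loads(raw.decode()),
        )
        producer = KafkaProducer(
            bootstrap_servers=BOOTSTRAP,
            value_serializer=lambda msg: json.dumps(msg).encode(),
        )
        # The real client re-publishes the same check until it sees the echo:
        # the log shows one OUT at 18:33:16 and another at 18:33:21 with the
        # same requestId before the IN arrives.
        producer.send(TOPIC, check)
        producer.flush()
        for record in consumer:
            event = record.value
            if (event.get("messageName") == "PDP_TOPIC_CHECK"
                    and event.get("requestId") == check["requestId"]):
                return True  # our own message came back: the topic is usable
        return False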
[2025-06-15T18:33:21.351+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"ddacd0a5-e547-476c-a7cb-6000253400d0","timestampMs":1750012401302,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:21.352+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.105+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"2bcbbf1d-f67f-476c-bca6-85781a33e3c1","timestampMs":1750012402007,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.111+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=2bcbbf1d-f67f-476c-bca6-85781a33e3c1, timestampMs=1750012402007, name=xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.Naming, typeVersion=1.0.0, properties={policy-instance-name=ONAP_NF_NAMING_TIMESTAMP, naming-models=[{naming-type=VNF, naming-recipe=AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP, name-operation=to_lower_case(), naming-properties=[{property-name=AIC_CLOUD_REGION}, {property-name=CONSTANT, property-value=onap-nf}, {property-name=TIMESTAMP}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VNFC, 
naming-recipe=VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=ENTIRETY, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}, {property-name=NFC_NAMING_CODE}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VF-MODULE, naming-recipe=VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-value=-, property-name=DELIMITER}, {property-name=VF_MODULE_LABEL}, {property-name=VF_MODULE_TYPE}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=PRECEEDING, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}]}]}))], policiesToBeUndeployed=[])
18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.118+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP type: onap.policies.Naming weight: null policy:
18:36:08 policy-xacml-pdp | {"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}
18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.204+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is
18:36:08 policy-xacml-pdp | [XACML PolicyType XML dump omitted: the angle-bracket markup was stripped when this log was rendered, leaving only text nodes. Recoverable content: PolicyId SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy type onap.policies.Naming version 1.0.0, a rule described as "Default is to PERMIT if the policy matches.", and the TOSCA policy JSON above embedded verbatim as an obligation AttributeValue.]
18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.209+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory}
18:36:08 policy-xacml-pdp | /opt/app/policy/pdpx/apps/naming/xacml.properties 18:36:08
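The "Storing xacml properties" entry above shows how each application (here, naming) persists its engine configuration as a plain Java properties file next to the translated policy. A minimal sketch (hypothetical helper, not ONAP code) that writes the same key/value pairs, with the values copied verbatim from the log:

    NAMING_PROPS = {
        "xacml.rootPolicies": "root1",
        "root1.file": "/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml",
        "xacml.referencedPolicies": "",
        "xacml.pdpEngineFactory": "com.att.research.xacmlatt.pdp.ATTPDPEngineFactory",
        "xacml.att.policyFinderFactory": "org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory",
        "xacml.att.policyFinderFactory.combineRootPolicies": "urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides",
    }

    def write_properties(path: str, props: dict) -> None:
        # Java-style .properties output: one key=value pair per line.
        # (A full implementation would also escape ':', '=' and non-Latin-1 text.)
        with open(path, "w", encoding="iso-8859-1") as fh:
            for key, value in props.items():
                fh.write(f"{key}={value}\n")

    # The PDP writes the file under the application's data area:
    write_properties("/opt/app/policy/pdpx/apps/naming/xacml.properties", NAMING_PROPS)

Pointing xacml.rootPolicies at root1 and root1.file at the translated XML is presumably what lets the engine find the deployed policy set again; the monitoring and optimization applications later in the log store the same keys with their own root1.file paths.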
policy-xacml-pdp | [2025-06-15T18:33:22.217+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy-version=1.0.0} into application naming 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.218+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"2bcbbf1d-f67f-476c-bca6-85781a33e3c1","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"c5434883-83d8-4d0d-85b8-dace443a1c6b","timestampMs":1750012402217,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.227+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=44d556b1-c408-48c6-85fd-a53c71fd7ad9, timestampMs=1750012402227, name=xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.228+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"44d556b1-c408-48c6-85fd-a53c71fd7ad9","timestampMs":1750012402227,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.236+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"2bcbbf1d-f67f-476c-bca6-85781a33e3c1","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"c5434883-83d8-4d0d-85b8-dace443a1c6b","timestampMs":1750012402217,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.236+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.246+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"44d556b1-c408-48c6-85fd-a53c71fd7ad9","timestampMs":1750012402227,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.247+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.300+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | 
{"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"bbc65358-d1d0-4084-b9ab-42951b7c0e24","timestampMs":1750012402008,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.302+00:00|INFO|XacmlPdpStateChangeListener|KAFKA-source-policy-pdp-pap] PDP State Change message has been received from the PAP - PdpStateChange(super=PdpMessage(messageName=PDP_STATE_CHANGE, requestId=bbc65358-d1d0-4084-b9ab-42951b7c0e24, timestampMs=1750012402008, name=xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5, state=ACTIVE) 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.302+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] set state of org.onap.policy.pdpx.main.XacmlState@19e2f5fd to ACTIVE 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.303+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] State change: ACTIVE - Starting rest controller 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.303+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"bbc65358-d1d0-4084-b9ab-42951b7c0e24","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"4863b4e9-9446-4008-b1a6-205a759de7cc","timestampMs":1750012402303,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.319+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"bbc65358-d1d0-4084-b9ab-42951b7c0e24","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"4863b4e9-9446-4008-b1a6-205a759de7cc","timestampMs":1750012402303,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.320+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.910+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c37b551-e594-4cd3-9436-e93d096d1bb9","timestampMs":1750012402616,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.911+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=7c37b551-e594-4cd3-9436-e93d096d1bb9, timestampMs=1750012402616, name=xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[], policiesToBeUndeployed=[]) 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.912+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | 
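Because the PDP consumes the same topic it publishes to, its own PDP_STATUS messages come straight back, and the "discarding event of type PDP_STATUS" entries above show the dispatcher dropping them: it routes purely on messageName and ignores types with no registered listener. A rough sketch of that routing behaviour (hypothetical code, not the actual MessageTypeDispatcher):

    import json

    class Dispatcher:
        """Routes topic events by their messageName field, as the log suggests."""

        def __init__(self):
            self._listeners = {}

        def register(self, message_name, handler):
            self._listeners[message_name] = handler

        def on_topic_event(self, raw: str) -> None:
            event = json.loads(raw)
            name = event.get("messageName")
            handler = self._listeners.get(name)
            if handler is None:
                print(f"discarding event of type {name}")  # matches the log entries
                return
            handler(event)

    dispatcher = Dispatcher()
    dispatcher.register("PDP_UPDATE",
                        lambda e: print("deploy:", [p["name"] for p in e.get("policiesToBeDeployed", [])]))
    dispatcher.register("PDP_STATE_CHANGE",
                        lambda e: print("state ->", e["state"]))
    dispatcher.on_topic_event('{"messageName": "PDP_STATUS"}')  # discarded, like the PDP's own echoes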
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"7c37b551-e594-4cd3-9436-e93d096d1bb9","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"5f3766ab-697c-4621-97ba-1f6ca2555905","timestampMs":1750012402911,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.919+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"7c37b551-e594-4cd3-9436-e93d096d1bb9","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"5f3766ab-697c-4621-97ba-1f6ca2555905","timestampMs":1750012402911,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:22.919+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:35.657+00:00|INFO|RequestLog|qtp2014233765-33] 172.17.0.2 - policyadmin [15/Jun/2025:18:33:35 +0000] "GET /metrics HTTP/1.1" 200 2133 "" "Prometheus/3.4.1" 18:36:08 policy-xacml-pdp | [2025-06-15T18:33:41.362+00:00|INFO|RequestLog|qtp2014233765-28] 172.17.0.1 - - [15/Jun/2025:18:33:41 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0" 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:35.570+00:00|INFO|RequestLog|qtp2014233765-27] 172.17.0.2 - policyadmin [15/Jun/2025:18:34:35 +0000] "GET /metrics HTTP/1.1" 200 2130 "" "Prometheus/3.4.1" 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:36.145+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.7 - policyadmin [15/Jun/2025:18:34:36 +0000] "GET /policy/pdpx/v1/healthcheck?null HTTP/1.1" 200 110 "" "python-requests/2.32.4" 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:36.162+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.7 - policyadmin [15/Jun/2025:18:34:36 +0000] "GET /metrics?null HTTP/1.1" 200 2055 "" "python-requests/2.32.4" 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.609+00:00|INFO|GuardTranslator|qtp2014233765-30] Converting Request DecisionRequest(onapName=Guard, onapComponent=Guard-component, onapInstance=Guard-component-instance, requestId=unique-request-guard-1, context=null, action=guard, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={guard={actor=APPC, operation=ModifyConfig, target=f17face5-69cb-4c88-9e0b-7426db7edddd, requestId=c7c6a4aa-bb61-4a15-b831-ba1472dd4a65, clname=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}}) 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.630+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-dateTime 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.630+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-date 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.630+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-time 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.630+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:guard:timezone 18:36:08 
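The healthcheck and metrics probes above come from the CSIT suite, and the DecisionRequest being converted is its guard test; the WARN entries that continue below just record optional guard attributes this particular request does not carry. A sketch of the same REST call with requests (port, credentials and payload values are taken from the log itself; the host name and the exact JSON field casing are assumptions):

    import requests  # the CSIT client in the log is python-requests/2.32.4

    resp = requests.post(
        "http://localhost:6969/policy/pdpx/v1/decision",  # host assumed; port from the Jetty config above
        params={"abbrev": "true"},
        auth=("policyadmin", "zb!XztG34"),                # credentials as dumped by the Jetty config above
        json={
            "ONAPName": "Guard",
            "ONAPComponent": "Guard-component",
            "ONAPInstance": "Guard-component-instance",
            "requestId": "unique-request-guard-1",
            "action": "guard",
            "resource": {
                "guard": {
                    "actor": "APPC",
                    "operation": "ModifyConfig",
                    "target": "f17face5-69cb-4c88-9e0b-7426db7edddd",
                    "requestId": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
                    "clname": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
                }
            },
        },
        timeout=30,
    )
    print(resp.status_code, resp.text)  # this run answered 200, decision NotApplicable (see the converted Response below)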
policy-xacml-pdp | [2025-06-15T18:34:37.631+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:guard:target:vf-count 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.631+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-name 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.631+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-id 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.632+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-type 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.632+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.nf-naming-code 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.632+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:guard:target:vserver.vserver-id 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.632+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:guard:target:cloud-region.cloud-region-id 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.637+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Constructed using properties {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.637+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Initializing OnapPolicyFinderFactory 
Properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.637+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Combining root policies with urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.649+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Root Policies: 1 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.649+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Referenced Policies: 0 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.651+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating policy map with policy 59921729-0deb-4106-9fca-035b67d9da79 version 1.0 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.653+00:00|INFO|StdOnapPip|qtp2014233765-30] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.738+00:00|INFO|LogHelper|qtp2014233765-30] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.770+00:00|INFO|Version|qtp2014233765-30] HHH000412: Hibernate ORM core version 6.6.16.Final 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.792+00:00|INFO|RegionFactoryInitiator|qtp2014233765-30] HHH000026: Second-level cache disabled 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:37.927+00:00|WARN|pooling|qtp2014233765-30] HHH10001002: Using built-in connection pool (not intended for production use) 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:38.127+00:00|INFO|pooling|qtp2014233765-30] HHH10001005: Database info: 18:36:08 policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] 
18:36:08 policy-xacml-pdp | Database driver: org.postgresql.Driver 18:36:08 policy-xacml-pdp | Database version: 16.4 18:36:08 policy-xacml-pdp | Autocommit mode: false 18:36:08 policy-xacml-pdp | Isolation level: undefined/unknown 18:36:08 policy-xacml-pdp | Minimum pool size: 1 18:36:08 policy-xacml-pdp | Maximum pool size: 20 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.085+00:00|INFO|JtaPlatformInitiator|qtp2014233765-30] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.119+00:00|INFO|StdOnapPip|qtp2014233765-30] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.122+00:00|INFO|LogHelper|qtp2014233765-30] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.123+00:00|INFO|RegionFactoryInitiator|qtp2014233765-30] HHH000026: Second-level cache disabled 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.142+00:00|WARN|pooling|qtp2014233765-30] HHH10001002: Using built-in connection pool (not intended for production use) 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.164+00:00|INFO|pooling|qtp2014233765-30] HHH10001005: Database info: 18:36:08 policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] 18:36:08 policy-xacml-pdp | Database driver: org.postgresql.Driver 18:36:08 policy-xacml-pdp | Database version: 16.4 18:36:08 policy-xacml-pdp | Autocommit mode: false 18:36:08 policy-xacml-pdp | Isolation level: undefined/unknown 18:36:08 
policy-xacml-pdp | Minimum pool size: 1 18:36:08 policy-xacml-pdp | Maximum pool size: 20 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.195+00:00|INFO|JtaPlatformInitiator|qtp2014233765-30] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.199+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-30] Elapsed Time: 1567ms 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.199+00:00|INFO|GuardTranslator|qtp2014233765-30] Converting Response {results=[{decision=NotApplicable,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component-instance}],includeInResults=true}{attributeId=urn:org:onap:guard:request:request-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=unique-request-guard-1}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:guard:clname:clname-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}],includeInResults=true}{attributeId=urn:org:onap:guard:actor:actor-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=APPC}],includeInResults=true}{attributeId=urn:org:onap:guard:operation:operation-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ModifyConfig}],includeInResults=true}{attributeId=urn:org:onap:guard:target:target-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=f17face5-69cb-4c88-9e0b-7426db7edddd}],includeInResults=true}]}]}]} 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.203+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.7 - policyadmin [15/Jun/2025:18:34:37 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 19 "" "python-requests/2.32.4" 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.846+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"20fcec9c-0e98-48c5-b4b3-b726a44b58b4","timestampMs":1750012479761,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.847+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=20fcec9c-0e98-48c5-b4b3-b726a44b58b4, timestampMs=1750012479761, name=xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.monitoring.tcagen2, typeVersion=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}})), ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.optimization.resource.AffinityPolicy, typeVersion=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}))], policiesToBeUndeployed=[]) 18:36:08 policy-xacml-pdp | 
[2025-06-15T18:34:39.848+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: onap.restart.tca type: onap.policies.monitoring.tcagen2 weight: null policy:
18:36:08 policy-xacml-pdp | {"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.872+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is
18:36:08 policy-xacml-pdp | [XACML PolicyType XML dump omitted: the angle-bracket markup was stripped when this log was rendered, leaving only text nodes. Recoverable content: PolicyId onap.restart.tca, policy type onap.policies.monitoring.tcagen2 version 1.0.0, a rule described as "Default is to PERMIT if the policy matches.", and the TOSCA policy JSON above embedded verbatim as an obligation AttributeValue.]
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.872+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory}
18:36:08 policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.873+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} into application monitoring
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.873+00:00|INFO|OptimizationPdpApplication|KAFKA-source-policy-pdp-pap] optimization can support onap.policies.optimization.resource.AffinityPolicy 1.0.0
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.874+00:00|ERROR|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] PolicyType not found in data area yet /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml
18:36:08 policy-xacml-pdp | java.nio.file.NoSuchFileException:
/opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml 18:36:08 policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92) 18:36:08 policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) 18:36:08 policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) 18:36:08 policy-xacml-pdp | at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218) 18:36:08 policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:380) 18:36:08 policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:432) 18:36:08 policy-xacml-pdp | at java.base/java.nio.file.Files.readAllBytes(Files.java:3288) 18:36:08 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.loadPolicyType(StdMatchableTranslator.java:515) 18:36:08 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.findPolicyType(StdMatchableTranslator.java:480) 18:36:08 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.convertPolicy(StdMatchableTranslator.java:241) 18:36:08 policy-xacml-pdp | at org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplicationTranslator.convertPolicy(OptimizationPdpApplicationTranslator.java:72) 18:36:08 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider.loadPolicy(StdXacmlApplicationServiceProvider.java:127) 18:36:08 policy-xacml-pdp | at org.onap.policy.pdpx.main.rest.XacmlPdpApplicationManager.loadDeployedPolicy(XacmlPdpApplicationManager.java:199) 18:36:08 policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.XacmlPdpUpdatePublisher.handlePdpUpdate(XacmlPdpUpdatePublisher.java:91) 18:36:08 policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:72) 18:36:08 policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:36) 18:36:08 policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.ScoListener.onTopicEvent(ScoListener.java:75) 18:36:08 policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher.onTopicEvent(MessageTypeDispatcher.java:97) 18:36:08 policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.JsonListener.onTopicEvent(JsonListener.java:61) 18:36:08 policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.TopicBase.broadcast(TopicBase.java:170) 18:36:08 policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.fetchAllMessages(SingleThreadedBusTopicSource.java:252) 18:36:08 policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.run(SingleThreadedBusTopicSource.java:235) 18:36:08 policy-xacml-pdp | at java.base/java.lang.Thread.run(Thread.java:840) 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.910+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:39.913+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.378+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] Successfully pulled onap.policies.optimization.resource.AffinityPolicy 
1.0.0
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.415+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.resource.AffinityPolicy:1.0.0
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.415+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Retrieving datatype policy.data.affinityProperties_properties
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.415+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.Resource:1.0.0
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.416+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.Optimization:1.0.0
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.416+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Found root - done scanning
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.416+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: OSDF_CASABLANCA.Affinity_Default type: onap.policies.optimization.resource.AffinityPolicy weight: 0 policy:
18:36:08 policy-xacml-pdp | {"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.432+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] [generated XACML policy XML elided: the XML markup was stripped when this log was captured, leaving only its text nodes. Surviving fragments, in order: "Default is to PERMIT if the policy matches.", "IF exists and is equal", "Does the policy-type attribute exist?", "Get the size of policy-type attributes", "0", "Is this policy-type in the list?", the policy type onap.policies.optimization.resource.AffinityPolicy, the policy id OSDF_CASABLANCA.Affinity_Default, a weight of 0, and a copy of the ToscaPolicy JSON shown above.]
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.447+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is [the same tag-stripped XACML policy XML elided; identical surviving fragments as above.]
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.447+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory}
18:36:08 policy-xacml-pdp | /opt/app/policy/pdpx/apps/optimization/xacml.properties
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.448+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0} into application optimization
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.448+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"20fcec9c-0e98-48c5-b4b3-b726a44b58b4","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e9b3e87d-ebc9-4b81-8fd3-0014200a4b83","timestampMs":1750012480448,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.463+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"20fcec9c-0e98-48c5-b4b3-b726a44b58b4","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"e9b3e87d-ebc9-4b81-8fd3-0014200a4b83","timestampMs":1750012480448,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:34:40.463+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.041+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.043+00:00|WARN|RequestParser|qtp2014233765-29] Unable to extract attribute value from object: urn:org:onap:policy-type 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.043+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.043+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Initializing OnapPolicyFinderFactory Properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.043+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.044+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Loading policy file /opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.061+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Root Policies: 1 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.061+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-29] Referenced Policies: 0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.061+00:00|INFO|StdPolicyFinder|qtp2014233765-29] Updating policy map with policy d1777dad-3f49-4160-b7c1-356772d9933c version 1.0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.061+00:00|INFO|StdPolicyFinder|qtp2014233765-29] Updating 
policy map with policy onap.restart.tca version 1.0.0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.077+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-29] Elapsed Time: 34ms 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.077+00:00|INFO|StdBaseTranslator|qtp2014233765-29] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=d1777dad-3f49-4160-b7c1-356772d9933c,version=1.0}]}]} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.077+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Obligation: urn:org:onap:rest:body 18:36:08 policy-xacml-pdp | 
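The "Constructed using properties" entry above lists the per-application engine configuration that the PDP keeps in an xacml.properties file, as the "Storing xacml properties" lines earlier show for the optimization application. A minimal Python sketch that re-creates such a file from the logged map, assuming the stored file is plain key=value lines (keys and values are copied from the log; the exact on-disk layout is an assumption):

    # Rebuild xacml.properties from the map logged by OnapPolicyFinderFactory.
    # Only a subset of keys is shown; the remaining factory keys follow the
    # same pattern, exactly as they appear in the log line above.
    xacml_properties = {
        "xacml.rootPolicies": "root1",
        "root1.file": "/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml",
        "xacml.referencedPolicies": "",
        "xacml.pdpEngineFactory": "com.att.research.xacmlatt.pdp.ATTPDPEngineFactory",
        "xacml.att.policyFinderFactory": "org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory",
        "xacml.att.policyFinderFactory.combineRootPolicies": "urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides",
    }
    with open("xacml.properties", "w", encoding="utf-8") as fh:
        for key, value in xacml_properties.items():  # one key=value per line
            fh.write(f"{key}={value}\n")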
[2025-06-15T18:35:04.078+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.078+00:00|INFO|MonitoringPdpApplication|qtp2014233765-29] Abbreviating decision results DecisionResponse(status=null, message=null, advice=null, obligations=null, policies={onap.restart.tca={type=onap.policies.monitoring.tcagen2, type_version=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}}, name=onap.restart.tca, version=1.0.0, metadata={policy-id=onap.restart.tca, policy-version=1.0.0}}}, attributes=null) 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.080+00:00|INFO|RequestLog|qtp2014233765-29] 172.17.0.7 - policyadmin [15/Jun/2025:18:35:04 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 146 "" "python-requests/2.32.4" 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.097+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.098+00:00|WARN|RequestParser|qtp2014233765-29] Unable to extract attribute value from object: urn:org:onap:policy-type 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.098+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-29] Elapsed Time: 0ms 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.099+00:00|INFO|StdBaseTranslator|qtp2014233765-29] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=d1777dad-3f49-4160-b7c1-356772d9933c,version=1.0}]}]} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.099+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Obligation: urn:org:onap:rest:body 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.099+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator 18:36:08 policy-xacml-pdp | 
[2025-06-15T18:35:04.100+00:00|INFO|MonitoringPdpApplication|qtp2014233765-29] Unsupported query param for Monitoring application: {null=[]} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.103+00:00|INFO|RequestLog|qtp2014233765-29] 172.17.0.7 - policyadmin [15/Jun/2025:18:35:04 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1055 "" "python-requests/2.32.4" 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.119+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-33] Converting Request DecisionRequest(onapName=SDNC, onapComponent=SDNC-component, onapInstance=SDNC-component-instance, requestId=unique-request-sdnc-1, context=null, action=naming, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={nfRole=[], naming-type=[], property-name=[], policy-type=[onap.policies.Naming]}) 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.119+00:00|WARN|RequestParser|qtp2014233765-33] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:resource:resource-id 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.120+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.120+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Initializing OnapPolicyFinderFactory Properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.120+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.120+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Loading policy file /opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.127+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Root Policies: 1 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.127+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-33] Referenced Policies: 0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.127+00:00|INFO|StdPolicyFinder|qtp2014233765-33] Updating policy map with policy 916e7e6a-580c-4367-ad4d-3522b81da881 version 1.0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.127+00:00|INFO|StdPolicyFinder|qtp2014233765-33] Updating policy map with policy SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP version 1.0.0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.128+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-33] Elapsed Time: 9ms 18:36:08 policy-xacml-pdp | 
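The RequestLog entries above ("POST /policy/pdpx/v1/decision?abbrev=true" and "?null", user-agent python-requests/2.32.4) are the CSIT suite querying the decision API. A minimal sketch of an equivalent call, assuming the PDP is reachable at policy-xacml-pdp:6969 and that the request body uses the ONAPName/ONAPComponent/ONAPInstance field names of the ONAP decision API; the host, scheme, and policyadmin password are assumptions not recorded in this log:

    import requests

    PDP_URL = "http://policy-xacml-pdp:6969"  # assumed endpoint; not shown in this log
    decision_request = {
        "ONAPName": "DCAE",
        "ONAPComponent": "PolicyHandler",
        "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
        "action": "configure",
        "resource": {"policy-id": "onap.restart.tca"},
    }
    # ?abbrev=true asks the Monitoring application to trim policy properties,
    # which is what the "Abbreviating decision results" line above reports.
    resp = requests.post(f"{PDP_URL}/policy/pdpx/v1/decision",
                         params={"abbrev": "true"},
                         json=decision_request,
                         auth=("policyadmin", "CHANGE_ME"))  # password is a placeholder
    print(resp.status_code, resp.json())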
[2025-06-15T18:35:04.128+00:00|INFO|StdBaseTranslator|qtp2014233765-33] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component-instance}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:policy-type,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}],includeInResults=true}]}],policyIdentifiers=[{id=SDNC_Policy.ONAP_NF_NAMING_TI
MESTAMP,version=1.0.0}],policySetIdentifiers=[{id=916e7e6a-580c-4367-ad4d-3522b81da881,version=1.0}]}]} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.128+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-33] Obligation: urn:org:onap:rest:body 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.129+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-33] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.131+00:00|INFO|RequestLog|qtp2014233765-33] 172.17.0.7 - policyadmin [15/Jun/2025:18:35:04 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1598 "" "python-requests/2.32.4" 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.148+00:00|INFO|StdMatchableTranslator|qtp2014233765-32] Converting Request DecisionRequest(onapName=OOF, onapComponent=OOF-component, onapInstance=OOF-component-instance, requestId=null, context={subscriberName=[]}, action=optimize, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={scope=[], services=[], resources=[], geography=[]}) 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.151+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.151+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Initializing OnapPolicyFinderFactory Properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.151+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.151+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Loading policy file /opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.158+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Root Policies: 1 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.158+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Referenced Policies: 0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.158+00:00|INFO|StdPolicyFinder|qtp2014233765-32] Updating policy map with policy 75367997-fd31-4f73-bfbb-081952577b25 version 1.0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.158+00:00|INFO|StdPolicyFinder|qtp2014233765-32] Updating policy map with policy OSDF_CASABLANCA.Affinity_Default version 1.0.0 18:36:08 policy-xacml-pdp | 
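The optimize request above (onapName=OOF, action=optimize, empty scope/services/resources/geography) causes the engine to load OSDF_CASABLANCA.Affinity_Default; the Permit response with a weight obligation follows below. A sketch of the same call, under the same endpoint and credential assumptions as the previous snippet:

    import requests

    optimize_request = {
        "ONAPName": "OOF",
        "ONAPComponent": "OOF-component",
        "ONAPInstance": "OOF-component-instance",
        "action": "optimize",
        "context": {"subscriberName": []},
        "resource": {"scope": [], "services": [], "resources": [], "geography": []},
    }
    resp = requests.post("http://policy-xacml-pdp:6969/policy/pdpx/v1/decision",
                         json=optimize_request,
                         auth=("policyadmin", "CHANGE_ME"))
    # With all match fields empty, every matchable policy applies; expect
    # OSDF_CASABLANCA.Affinity_Default (weight 0) among the returned policies.
    print(resp.json())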
[2025-06-15T18:35:04.159+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-32] Elapsed Time: 9ms 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.159+00:00|INFO|StdBaseTranslator|qtp2014233765-32] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OSDF_CASABLANCA.Affinity_Default}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:weight,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#integer,value=0}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.optimization.resource.AffinityPolicy}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component-instance}],includeInResults=true}]}],policyIdentifiers=[{id=OSDF_CASABLANCA.Affinity_Default,version=1.0.0}],policySetIdentifiers=[{id=75367997-fd31-4f73-bfbb-081952577b25,version=1.0}]}]} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.160+00:00|INFO|StdMatchableTranslator|qtp2014233765-32] Obligation: urn:org:onap:rest:body 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.160+00:00|INFO|StdMatchableTranslator|qtp2014233765-32] New entry onap.policies.optimization.resource.AffinityPolicy weight 0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.161+00:00|INFO|StdMatchableTranslator|qtp2014233765-32] Policy (OSDF_CASABLANCA.Affinity_Default,{type=onap.policies.optimization.resource.AffinityPolicy, type_version=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}, name=OSDF_CASABLANCA.Affinity_Default, version=1.0.0, metadata={policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0}}) 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.162+00:00|INFO|RequestLog|qtp2014233765-32] 172.17.0.7 - policyadmin 
[15/Jun/2025:18:35:04 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 467 "" "python-requests/2.32.4" 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.581+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"source":"pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"f580fdd1-64fd-40bb-a9c6-117e2e9171ce","timestampMs":1750012504541,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.582+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=f580fdd1-64fd-40bb-a9c6-117e2e9171ce, timestampMs=1750012504541, name=xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-dfc484cc-26cb-4183-ad53-36f2b3c8ded5, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[], policiesToBeUndeployed=[onap.restart.tca 1.0.0]) 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.582+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.582+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.582+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.582+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.582+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.582+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 18:36:08 
policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.583+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Unloaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} from application monitoring 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.583+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"f580fdd1-64fd-40bb-a9c6-117e2e9171ce","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"8b0c8a16-87af-47f3-8269-e085ca869e1f","timestampMs":1750012504583,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.592+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"f580fdd1-64fd-40bb-a9c6-117e2e9171ce","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"8b0c8a16-87af-47f3-8269-e085ca869e1f","timestampMs":1750012504583,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:04.592+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:22.238+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=f1a83c11-d55e-46b8-8140-e2781af980b9, timestampMs=1750012522238, name=xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=ACTIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0, OSDF_CASABLANCA.Affinity_Default 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:22.238+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"f1a83c11-d55e-46b8-8140-e2781af980b9","timestampMs":1750012522238,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:22.251+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 18:36:08 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"f1a83c11-d55e-46b8-8140-e2781af980b9","timestampMs":1750012522238,"name":"xacml-1458e97f-cafa-402f-b553-6d94f0f3f22f","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:22.251+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] 
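The [IN|KAFKA|policy-pdp-pap] / [OUT|KAFKA|policy-pdp-pap] lines above are PDP_UPDATE, PDP_STATUS, and heartbeat messages exchanged with the PAP over the policy-pdp-pap Kafka topic. A minimal observer of that traffic, assuming the kafka-python client and a broker reachable at kafka:9092 (the broker address is not shown in this part of the log):

    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers="kafka:9092",  # assumed broker address
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    for record in consumer:
        msg = record.value
        # PDP_STATUS carries state/healthy/policies; PDP_UPDATE carries
        # policiesToBeDeployed/policiesToBeUndeployed, as in the payloads above.
        print(msg.get("messageName"), msg.get("requestId"), msg.get("pdpGroup"))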
discarding event of type PDP_STATUS 18:36:08 policy-xacml-pdp | [2025-06-15T18:35:35.574+00:00|INFO|RequestLog|qtp2014233765-32] 172.17.0.2 - policyadmin [15/Jun/2025:18:35:35 +0000] "GET /metrics HTTP/1.1" 200 2211 "" "Prometheus/3.4.1" 18:36:08 postgres | The files belonging to this database system will be owned by user "postgres". 18:36:08 postgres | This user must also own the server process. 18:36:08 postgres | 18:36:08 postgres | The database cluster will be initialized with locale "en_US.utf8". 18:36:08 postgres | The default database encoding has accordingly been set to "UTF8". 18:36:08 postgres | The default text search configuration will be set to "english". 18:36:08 postgres | 18:36:08 postgres | Data page checksums are disabled. 18:36:08 postgres | 18:36:08 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 18:36:08 postgres | creating subdirectories ... ok 18:36:08 postgres | selecting dynamic shared memory implementation ... posix 18:36:08 postgres | selecting default max_connections ... 100 18:36:08 postgres | selecting default shared_buffers ... 128MB 18:36:08 postgres | selecting default time zone ... Etc/UTC 18:36:08 postgres | creating configuration files ... ok 18:36:08 postgres | running bootstrap script ... ok 18:36:08 postgres | performing post-bootstrap initialization ... ok 18:36:08 postgres | syncing data to disk ... ok 18:36:08 postgres | 18:36:08 postgres | 18:36:08 postgres | Success. You can now start the database server using: 18:36:08 postgres | 18:36:08 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 18:36:08 postgres | 18:36:08 postgres | initdb: warning: enabling "trust" authentication for local connections 18:36:08 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 18:36:08 postgres | waiting for server to start....2025-06-15 18:32:38.374 UTC [47] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 18:36:08 postgres | 2025-06-15 18:32:38.375 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 18:36:08 postgres | 2025-06-15 18:32:38.412 UTC [50] LOG: database system was shut down at 2025-06-15 18:32:37 UTC 18:36:08 postgres | 2025-06-15 18:32:38.417 UTC [47] LOG: database system is ready to accept connections 18:36:08 postgres | done 18:36:08 postgres | server started 18:36:08 postgres | 18:36:08 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 18:36:08 postgres | 18:36:08 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 18:36:08 postgres | #!/bin/bash -xv 18:36:08 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved 18:36:08 postgres | # 18:36:08 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 18:36:08 postgres | # you may not use this file except in compliance with the License. 18:36:08 postgres | # You may obtain a copy of the License at 18:36:08 postgres | # 18:36:08 postgres | # http://www.apache.org/licenses/LICENSE-2.0 18:36:08 postgres | # 18:36:08 postgres | # Unless required by applicable law or agreed to in writing, software 18:36:08 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 18:36:08 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
18:36:08 postgres | # See the License for the specific language governing permissions and 18:36:08 postgres | # limitations under the License. 18:36:08 postgres | 18:36:08 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 18:36:08 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 18:36:08 postgres | CREATE ROLE 18:36:08 postgres | 18:36:08 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:08 postgres | do 18:36:08 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 18:36:08 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 18:36:08 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 18:36:08 postgres | done 18:36:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 18:36:08 postgres | CREATE DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 18:36:08 postgres | ALTER DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 18:36:08 postgres | GRANT 18:36:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 18:36:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 18:36:08 postgres | CREATE DATABASE 18:36:08 postgres | ALTER DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 18:36:08 postgres | GRANT 18:36:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 18:36:08 postgres | CREATE DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 18:36:08 postgres | ALTER DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 18:36:08 postgres | GRANT 18:36:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 18:36:08 postgres | CREATE DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 18:36:08 postgres | ALTER DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 18:36:08 postgres | GRANT 18:36:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 18:36:08 postgres | CREATE DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 18:36:08 postgres | ALTER DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 18:36:08 postgres | 
GRANT 18:36:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 18:36:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' 18:36:08 postgres | CREATE DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 18:36:08 postgres | ALTER DATABASE 18:36:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 18:36:08 postgres | GRANT 18:36:08 postgres | 18:36:08 postgres | waiting for server to shut down....2025-06-15 18:32:39.913 UTC [47] LOG: received fast shutdown request 18:36:08 postgres | 2025-06-15 18:32:39.915 UTC [47] LOG: aborting any active transactions 18:36:08 postgres | 2025-06-15 18:32:39.919 UTC [47] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1 18:36:08 postgres | 2025-06-15 18:32:39.919 UTC [48] LOG: shutting down 18:36:08 postgres | 2025-06-15 18:32:39.921 UTC [48] LOG: checkpoint starting: shutdown immediate 18:36:08 postgres | 2025-06-15 18:32:40.761 UTC [48] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.541 s, sync=0.290 s, total=0.842 s; sync files=1788, longest=0.032 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 18:36:08 postgres | 2025-06-15 18:32:40.771 UTC [47] LOG: database system is shut down 18:36:08 postgres | done 18:36:08 postgres | server stopped 18:36:08 postgres | 18:36:08 postgres | PostgreSQL init process complete; ready for start up. 18:36:08 postgres | 18:36:08 postgres | 2025-06-15 18:32:40.840 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 18:36:08 postgres | 2025-06-15 18:32:40.840 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 18:36:08 postgres | 2025-06-15 18:32:40.840 UTC [1] LOG: listening on IPv6 address "::", port 5432 18:36:08 postgres | 2025-06-15 18:32:40.846 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 18:36:08 postgres | 2025-06-15 18:32:40.859 UTC [100] LOG: database system was shut down at 2025-06-15 18:32:40 UTC 18:36:08 postgres | 2025-06-15 18:32:40.868 UTC [1] LOG: database system is ready to accept connections 18:36:08 prometheus | time=2025-06-15T18:32:34.128Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 18:36:08 prometheus | time=2025-06-15T18:32:34.128Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 18:36:08 prometheus | time=2025-06-15T18:32:34.128Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 18:36:08 prometheus | time=2025-06-15T18:32:34.133Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 18:36:08 prometheus | time=2025-06-15T18:32:34.137Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 18:36:08 prometheus | 
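The db-pg.sh run above creates the policy_user role and six policy databases. A quick connectivity check against that setup, assuming psycopg2 is available and the container is reachable by the hostname postgres (user and password come from the script's CREATE USER line; the hostname is an assumption):

    import psycopg2

    DATABASES = ["migration", "pooling", "policyadmin",
                 "policyclamp", "operationshistory", "clampacm"]
    for db in DATABASES:
        conn = psycopg2.connect(host="postgres", port=5432, dbname=db,
                                user="policy_user", password="policy_user")
        with conn.cursor() as cur:
            cur.execute("SELECT current_database(), current_user")
            print(cur.fetchone())  # e.g. ('migration', 'policy_user')
        conn.close()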
time=2025-06-15T18:32:34.139Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 18:36:08 prometheus | time=2025-06-15T18:32:34.140Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 18:36:08 prometheus | time=2025-06-15T18:32:34.140Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090 18:36:08 prometheus | time=2025-06-15T18:32:34.144Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 18:36:08 prometheus | time=2025-06-15T18:32:34.144Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.23µs 18:36:08 prometheus | time=2025-06-15T18:32:34.144Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 18:36:08 prometheus | time=2025-06-15T18:32:34.146Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=1.54537ms 18:36:08 prometheus | time=2025-06-15T18:32:34.146Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=26.751µs wal_replay_duration=1.585111ms wbl_replay_duration=290ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.23µs total_replay_duration=1.670473ms 18:36:08 prometheus | time=2025-06-15T18:32:34.149Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 18:36:08 prometheus | time=2025-06-15T18:32:34.149Z level=INFO source=main.go:1290 msg="TSDB started" 18:36:08 prometheus | time=2025-06-15T18:32:34.149Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 18:36:08 prometheus | time=2025-06-15T18:32:34.150Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 18:36:08 prometheus | time=2025-06-15T18:32:34.150Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.16µs remote_storage=1.78µs web_handler=510ns query_engine=940ns scrape=218.556µs scrape_sd=216.685µs notify=173.075µs notify_sd=26.29µs rules=2.191µs tracing=7.11µs filename=/etc/prometheus/prometheus.yml totalDuration=1.15732ms 18:36:08 prometheus | time=2025-06-15T18:32:34.150Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 18:36:08 prometheus | time=2025-06-15T18:32:34.150Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" 18:36:08 zookeeper | ===> User 18:36:08 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 18:36:08 zookeeper | ===> Configuring ... 18:36:08 zookeeper | ===> Running preflight checks ... 18:36:08 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 18:36:08 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 18:36:08 zookeeper | ===> Launching ... 18:36:08 zookeeper | ===> Launching zookeeper ... 
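Prometheus has loaded /etc/prometheus/prometheus.yml and, per the earlier RequestLog entry ("GET /metrics ... Prometheus/3.4.1"), is scraping the PDP's metrics endpoint. A hand-rolled scrape for debugging, under the same endpoint and credential assumptions as the decision-API sketch above:

    import requests

    resp = requests.get("http://policy-xacml-pdp:6969/metrics",
                        auth=("policyadmin", "CHANGE_ME"))  # placeholder password
    for line in resp.text.splitlines():
        if line and not line.startswith("#"):  # skip HELP/TYPE comment lines
            print(line)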
18:36:08 zookeeper | [2025-06-15 18:32:38,082] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
18:36:08 zookeeper | [2025-06-15 18:32:38,084] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
18:36:08 zookeeper | [2025-06-15 18:32:38,084] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
18:36:08 zookeeper | [2025-06-15 18:32:38,085] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
18:36:08 zookeeper | [2025-06-15 18:32:38,085] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
18:36:08 zookeeper | [2025-06-15 18:32:38,086] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
18:36:08 zookeeper | [2025-06-15 18:32:38,086] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
18:36:08 zookeeper | [2025-06-15 18:32:38,086] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
18:36:08 zookeeper | [2025-06-15 18:32:38,086] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
18:36:08 zookeeper | [2025-06-15 18:32:38,087] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
18:36:08 zookeeper | [2025-06-15 18:32:38,087] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
18:36:08 zookeeper | [2025-06-15 18:32:38,087] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
18:36:08 zookeeper | [2025-06-15 18:32:38,087] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
18:36:08 zookeeper | [2025-06-15 18:32:38,087] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
18:36:08 zookeeper | [2025-06-15 18:32:38,087] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
18:36:08 zookeeper | [2025-06-15 18:32:38,088] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
18:36:08 zookeeper | [2025-06-15 18:32:38,097] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics)
18:36:08 zookeeper | [2025-06-15 18:32:38,099] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
18:36:08 zookeeper | [2025-06-15 18:32:38,099] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
18:36:08 zookeeper | [2025-06-15 18:32:38,101] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
18:36:08 zookeeper | [2025-06-15 18:32:38,133] INFO (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,133] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,133] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,133] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,133] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,133] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,133] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,133] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,133] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,133] INFO (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,134] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,134] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,134] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,134] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,134] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,134] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,135] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,136] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
18:36:08 zookeeper | [2025-06-15 18:32:38,137] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,137] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,138] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
18:36:08 zookeeper | [2025-06-15 18:32:38,138] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
18:36:08 zookeeper | [2025-06-15 18:32:38,138] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:08 zookeeper | [2025-06-15 18:32:38,138] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:08 zookeeper | [2025-06-15 18:32:38,139] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:08 zookeeper | [2025-06-15 18:32:38,139] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:08 zookeeper | [2025-06-15 18:32:38,139] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:08 zookeeper | [2025-06-15 18:32:38,139] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
18:36:08 zookeeper | [2025-06-15 18:32:38,140] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,140] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,141] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
18:36:08 zookeeper | [2025-06-15 18:32:38,141] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
18:36:08 zookeeper | [2025-06-15 18:32:38,141] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,159] INFO Logging initialized @381ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
18:36:08 zookeeper | [2025-06-15 18:32:38,206] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
18:36:08 zookeeper | [2025-06-15 18:32:38,207] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
18:36:08 zookeeper | [2025-06-15 18:32:38,220] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
18:36:08 zookeeper | [2025-06-15 18:32:38,255] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
18:36:08 zookeeper | [2025-06-15 18:32:38,255] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
18:36:08 zookeeper | [2025-06-15 18:32:38,256] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
18:36:08 zookeeper | [2025-06-15 18:32:38,259] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
18:36:08 zookeeper | [2025-06-15 18:32:38,267] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
18:36:08 zookeeper | [2025-06-15 18:32:38,275] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
18:36:08 zookeeper | [2025-06-15 18:32:38,275] INFO Started @501ms (org.eclipse.jetty.server.Server)
18:36:08 zookeeper | [2025-06-15 18:32:38,275] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,278] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
18:36:08 zookeeper | [2025-06-15 18:32:38,279] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
18:36:08 zookeeper | [2025-06-15 18:32:38,280] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
18:36:08 zookeeper | [2025-06-15 18:32:38,280] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
18:36:08 zookeeper | [2025-06-15 18:32:38,290] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
18:36:08 zookeeper | [2025-06-15 18:32:38,290] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
18:36:08 zookeeper | [2025-06-15 18:32:38,290] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
18:36:08 zookeeper | [2025-06-15 18:32:38,290] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
18:36:08 zookeeper | [2025-06-15 18:32:38,294] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
18:36:08 zookeeper | [2025-06-15 18:32:38,294] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
18:36:08 zookeeper | [2025-06-15 18:32:38,296] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
18:36:08 zookeeper | [2025-06-15 18:32:38,297] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
18:36:08 zookeeper | [2025-06-15 18:32:38,297] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
18:36:08 zookeeper | [2025-06-15 18:32:38,304] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
18:36:08 zookeeper | [2025-06-15 18:32:38,304] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
18:36:08 zookeeper | [2025-06-15 18:32:38,317] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
18:36:08 zookeeper | [2025-06-15 18:32:38,317] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
18:36:08 zookeeper | [2025-06-15 18:32:39,418] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
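The ZooKeeper log above is a complete standalone bring-up: QuorumPeerMain finds no quorum and hands off to ZooKeeperServerMain, the AdminServer comes up on port 8080 with command URL /commands, and the client port is bound at 0.0.0.0:2181. A quick way to smoke-test such an instance is to hit those two endpoints; a minimal sketch, assuming a reachable host named zookeeper (the host/container name is illustrative, and the srvr four-letter word must be whitelisted via 4lw.commands.whitelist in zookeeper.properties):

#!/usr/bin/env bash
# Probe the standalone ZooKeeper brought up above (host name "zookeeper" is illustrative).
set -euo pipefail

# AdminServer: the log shows it started on 0.0.0.0:8080 with command URL /commands.
curl -sf http://zookeeper:8080/commands/stat

# Client port: the server bound 0.0.0.0:2181; "srvr" prints version, zxid and mode.
# Requires 4lw.commands.whitelist=srvr (or *) in zookeeper.properties.
echo srvr | nc -w 2 zookeeper 2181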
18:36:08 Tearing down containers...
18:36:08 Container policy-csit Stopping
18:36:08 Container policy-xacml-pdp Stopping
18:36:08 Container grafana Stopping
18:36:08 Container policy-csit Stopped
18:36:08 Container policy-csit Removing
18:36:08 Container policy-csit Removed
18:36:09 Container grafana Stopped
18:36:09 Container grafana Removing
18:36:09 Container grafana Removed
18:36:09 Container prometheus Stopping
18:36:09 Container prometheus Stopped
18:36:09 Container prometheus Removing
18:36:09 Container prometheus Removed
18:36:19 Container policy-xacml-pdp Stopped
18:36:19 Container policy-xacml-pdp Removing
18:36:19 Container policy-xacml-pdp Removed
18:36:19 Container policy-pap Stopping
18:36:29 Container policy-pap Stopped
18:36:29 Container policy-pap Removing
18:36:29 Container policy-pap Removed
18:36:29 Container kafka Stopping
18:36:29 Container policy-api Stopping
18:36:30 Container kafka Stopped
18:36:30 Container kafka Removing
18:36:30 Container kafka Removed
18:36:30 Container zookeeper Stopping
18:36:31 Container zookeeper Stopped
18:36:31 Container zookeeper Removing
18:36:31 Container zookeeper Removed
18:36:39 Container policy-api Stopped
18:36:39 Container policy-api Removing
18:36:39 Container policy-api Removed
18:36:39 Container policy-db-migrator Stopping
18:36:39 Container policy-db-migrator Stopped
18:36:39 Container policy-db-migrator Removing
18:36:39 Container policy-db-migrator Removed
18:36:39 Container postgres Stopping
18:36:40 Container postgres Stopped
18:36:40 Container postgres Removing
18:36:40 Container postgres Removed
18:36:40 Network compose_default Removing
18:36:40 Network compose_default Removed
18:36:40 $ ssh-agent -k
18:36:40 unset SSH_AUTH_SOCK;
18:36:40 unset SSH_AGENT_PID;
18:36:40 echo Agent pid 2089 killed;
18:36:40 [ssh-agent] Stopped.
18:36:40 Robot results publisher started...
18:36:40 INFO: Checking test criticality is deprecated and will be dropped in a future release!
18:36:40 -Parsing output xml:
18:36:40 Done!
18:36:40 -Copying log files to build dir:
18:36:41 Done!
18:36:41 -Assigning results to build:
18:36:41 Done!
18:36:41 -Checking thresholds:
18:36:41 Done!
18:36:41 Done publishing Robot results.
18:36:41 [PostBuildScript] - [INFO] Executing post build scripts.
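The ordered Stopping/Removing sequence above (dependents such as policy-csit and grafana first, postgres and the compose_default network last) is the normal output of docker compose down, which removes services in reverse dependency order. A minimal sketch of the teardown step, assuming the job's compose project directory (the path and timeout are illustrative):

#!/usr/bin/env bash
# Tear down the CSIT stack the same way the log above does.
set -euo pipefail
cd "${WORKSPACE:-.}/compose"   # illustrative; wherever the compose file lives

# Stops containers in reverse depends_on order, removes them, then the network.
docker compose down --remove-orphans --timeout 10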
18:36:41 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins17358743594577430593.sh
18:36:41 ---> sysstat.sh
18:36:41 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins2692188028950083490.sh
18:36:41 ---> package-listing.sh
18:36:41 ++ facter osfamily
18:36:41 ++ tr '[:upper:]' '[:lower:]'
18:36:41 + OS_FAMILY=debian
18:36:41 + workspace=/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp
18:36:41 + START_PACKAGES=/tmp/packages_start.txt
18:36:41 + END_PACKAGES=/tmp/packages_end.txt
18:36:41 + DIFF_PACKAGES=/tmp/packages_diff.txt
18:36:41 + PACKAGES=/tmp/packages_start.txt
18:36:41 + '[' /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp ']'
18:36:41 + PACKAGES=/tmp/packages_end.txt
18:36:41 + case "${OS_FAMILY}" in
18:36:41 + dpkg -l
18:36:41 + grep '^ii'
18:36:41 + '[' -f /tmp/packages_start.txt ']'
18:36:41 + '[' -f /tmp/packages_end.txt ']'
18:36:41 + diff /tmp/packages_start.txt /tmp/packages_end.txt
18:36:41 + '[' /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp ']'
18:36:41 + mkdir -p /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/archives/
18:36:41 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/archives/
18:36:41 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins375069099715408194.sh
18:36:41 ---> capture-instance-metadata.sh
18:36:41 Setup pyenv:
18:36:41 system
18:36:41 3.8.13
18:36:41 3.9.13
18:36:41 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
18:36:41 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pIkU from file:/tmp/.os_lf_venv
18:36:43 lf-activate-venv(): INFO: Installing: lftools
18:36:52 lf-activate-venv(): INFO: Adding /tmp/venv-pIkU/bin to PATH
18:36:52 INFO: Running in OpenStack, capturing instance metadata
18:36:52 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins11207763108767114992.sh
18:36:52 provisioning config files...
18:36:52 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/config3960499273637298400tmp
18:36:52 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
18:36:52 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
18:36:52 [EnvInject] - Injecting environment variables from a build step.
18:36:52 [EnvInject] - Injecting as environment variables the properties content
18:36:52 SERVER_ID=logs
18:36:52
18:36:52 [EnvInject] - Variables injected successfully.
18:36:52 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins17750037851893405694.sh
18:36:52 ---> create-netrc.sh
18:36:53 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins4979658296247399722.sh
18:36:53 ---> python-tools-install.sh
18:36:53 Setup pyenv:
18:36:53 system
18:36:53 3.8.13
18:36:53 3.9.13
18:36:53 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
18:36:53 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pIkU from file:/tmp/.os_lf_venv
18:36:55 lf-activate-venv(): INFO: Installing: lftools
18:37:03 lf-activate-venv(): INFO: Adding /tmp/venv-pIkU/bin to PATH
18:37:03 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins13012390890590415984.sh
18:37:03 ---> sudo-logs.sh
18:37:03 Archiving 'sudo' log..
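The package-listing.sh trace above amounts to: detect the OS family with facter, snapshot the installed packages with dpkg -l | grep '^ii', diff the end-of-build snapshot against the start-of-build one, and archive all three files. A condensed sketch of that logic (file paths match the trace; the WORKSPACE fallback is illustrative):

#!/usr/bin/env bash
# Condensed package-listing logic, mirroring the shell trace above.
set -euo pipefail

START=/tmp/packages_start.txt
END=/tmp/packages_end.txt
DIFF=/tmp/packages_diff.txt
workspace="${WORKSPACE:-$PWD}"   # Jenkins exports WORKSPACE; $PWD is an illustrative fallback

case "$(facter osfamily | tr '[:upper:]' '[:lower:]')" in
  debian) dpkg -l | grep '^ii' > "$END" ;;   # end-of-build snapshot
esac

# diff exits non-zero when the lists differ, so tolerate that.
if [ -f "$START" ] && [ -f "$END" ]; then
  diff "$START" "$END" > "$DIFF" || true
fi

mkdir -p "$workspace/archives/"
cp -f "$DIFF" "$END" "$START" "$workspace/archives/"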
18:37:03 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins3880433444349025126.sh
18:37:03 ---> job-cost.sh
18:37:03 Setup pyenv:
18:37:03 system
18:37:03 3.8.13
18:37:03 3.9.13
18:37:03 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
18:37:03 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pIkU from file:/tmp/.os_lf_venv
18:37:05 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
18:37:09 lf-activate-venv(): INFO: Adding /tmp/venv-pIkU/bin to PATH
18:37:09 INFO: No Stack...
18:37:10 INFO: Retrieving Pricing Info for: v3-standard-8
18:37:10 INFO: Archiving Costs
18:37:10 [policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash -l /tmp/jenkins5789357321799545986.sh
18:37:10 ---> logs-deploy.sh
18:37:10 Setup pyenv:
18:37:10 system
18:37:10 3.8.13
18:37:10 3.9.13
18:37:10 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
18:37:10 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pIkU from file:/tmp/.os_lf_venv
18:37:12 lf-activate-venv(): INFO: Installing: lftools
18:37:20 lf-activate-venv(): INFO: Adding /tmp/venv-pIkU/bin to PATH
18:37:20 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-xacml-pdp/2011
18:37:20 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
18:37:21 Archives upload complete.
18:37:21 INFO: archiving logs to Nexus
18:37:22 ---> uname -a:
18:37:22 Linux prd-ubuntu1804-docker-8c-8g-21442 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
18:37:22
18:37:22
18:37:22 ---> lscpu:
18:37:22 Architecture: x86_64
18:37:22 CPU op-mode(s): 32-bit, 64-bit
18:37:22 Byte Order: Little Endian
18:37:22 CPU(s): 8
18:37:22 On-line CPU(s) list: 0-7
18:37:22 Thread(s) per core: 1
18:37:22 Core(s) per socket: 1
18:37:22 Socket(s): 8
18:37:22 NUMA node(s): 1
18:37:22 Vendor ID: AuthenticAMD
18:37:22 CPU family: 23
18:37:22 Model: 49
18:37:22 Model name: AMD EPYC-Rome Processor
18:37:22 Stepping: 0
18:37:22 CPU MHz: 2799.998
18:37:22 BogoMIPS: 5599.99
18:37:22 Virtualization: AMD-V
18:37:22 Hypervisor vendor: KVM
18:37:22 Virtualization type: full
18:37:22 L1d cache: 32K
18:37:22 L1i cache: 32K
18:37:22 L2 cache: 512K
18:37:22 L3 cache: 16384K
18:37:22 NUMA node0 CPU(s): 0-7
18:37:22 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
18:37:22
18:37:22
18:37:22 ---> nproc:
18:37:22 8
18:37:22
18:37:22
18:37:22 ---> df -h:
18:37:22 Filesystem Size Used Avail Use% Mounted on
18:37:22 udev 16G 0 16G 0% /dev
18:37:22 tmpfs 3.2G 708K 3.2G 1% /run
18:37:22 /dev/vda1 155G 15G 141G 10% /
18:37:22 tmpfs 16G 0 16G 0% /dev/shm
18:37:22 tmpfs 5.0M 0 5.0M 0% /run/lock
18:37:22 tmpfs 16G 0 16G 0% /sys/fs/cgroup
18:37:22 /dev/vda15 105M 4.4M 100M 5% /boot/efi
18:37:22 tmpfs 3.2G 0 3.2G 0% /run/user/1001
18:37:22
18:37:22
18:37:22 ---> free -m:
18:37:22 total used free shared buff/cache available
18:37:22 Mem: 32167 890 24273 0 7003 30821
18:37:22 Swap: 1023 0 1023
18:37:22
18:37:22
18:37:22 ---> ip addr:
18:37:22 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
18:37:22 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
18:37:22 inet 127.0.0.1/8 scope host lo
18:37:22 valid_lft forever preferred_lft forever
18:37:22 inet6 ::1/128 scope host
18:37:22 valid_lft forever preferred_lft forever
18:37:22 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
18:37:22 link/ether fa:16:3e:a2:8e:ef brd ff:ff:ff:ff:ff:ff
18:37:22 inet 10.30.107.88/23 brd 10.30.107.255 scope global dynamic ens3
18:37:22 valid_lft 85966sec preferred_lft 85966sec
18:37:22 inet6 fe80::f816:3eff:fea2:8eef/64 scope link
18:37:22 valid_lft forever preferred_lft forever
18:37:22 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
18:37:22 link/ether 02:42:a5:d5:81:cb brd ff:ff:ff:ff:ff:ff
18:37:22 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
18:37:22 valid_lft forever preferred_lft forever
18:37:22 inet6 fe80::42:a5ff:fed5:81cb/64 scope link
18:37:22 valid_lft forever preferred_lft forever
18:37:22
18:37:22
18:37:22 ---> sar -b -r -n DEV:
18:37:22 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21442) 06/15/25 _x86_64_ (8 CPU)
18:37:22
18:37:22 18:30:11 LINUX RESTART (8 CPU)
18:37:22
18:37:22 18:31:01 tps rtps wtps bread/s bwrtn/s
18:37:22 18:32:01 209.72 20.28 189.44 2344.41 65932.34
18:37:22 18:33:01 610.46 3.43 607.03 430.86 168824.53
18:37:22 18:34:01 159.29 0.13 159.16 13.20 45067.96
18:37:22 18:35:01 107.95 0.25 107.70 17.73 42651.16
18:37:22 18:36:01 22.70 0.00 22.70 0.00 26078.85
18:37:22 18:37:01 79.27 1.32 77.95 106.25 10967.11
18:37:22 Average: 198.23 4.24 194.00 485.41 59920.32
18:37:22
18:37:22 18:31:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
18:37:22 18:32:01 25534768 31547444 7404452 22.48 116772 6057832 2280544 6.71 1078828 5836052 3467168
18:37:22 18:33:01 24246676 30562152 8692544 26.39 158452 6263520 6948020 20.44 2256516 5814596 904
18:37:22 18:34:01 22883356 29723384 10055864 30.53 179888 6725588 8226828 24.21 3205960 6164928 22640
18:37:22 18:35:01 22641432 29599732 10297788 31.26 200500 6809668 8379052 24.65 3379276 6220608 400
18:37:22 18:36:01 22635108 29593836 10304112 31.28 200620 6810164 8354356 24.58 3387932 6218772 356
18:37:22 18:37:01 24807292 31510220 8131928 24.69 202160 6547944 1672456 4.92 1536924 5974140 6960
18:37:22 Average: 23791439 30422795 9147781 27.77 176399 6535786 5976876 17.59 2474239 6038183 583071
18:37:22
18:37:22 18:31:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
18:37:22 18:32:01 ens3 1299.52 786.42 32923.20 64.94 0.00 0.00 0.00 0.00
18:37:22 18:32:01 lo 13.80 13.80 1.29 1.29 0.00 0.00 0.00 0.00
18:37:22 18:32:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:37:22 18:33:01 ens3 58.11 45.89 313.64 5.20 0.00 0.00 0.00 0.00
18:37:22 18:33:01 veth2092832 0.83 1.02 0.05 0.06 0.00 0.00 0.00 0.00
18:37:22 18:33:01 vethd4c0ccc 1.57 1.57 0.16 0.17 0.00 0.00 0.00 0.00
18:37:22 18:33:01 vethfc3cc7c 0.10 0.42 0.01 0.03 0.00 0.00 0.00 0.00
18:37:22 18:34:01 ens3 170.82 120.73 1905.55 8.86 0.00 0.00 0.00 0.00
18:37:22 18:34:01 veth2092832 142.21 142.64 16.81 34.39 0.00 0.00 0.00 0.00
18:37:22 18:34:01 vethd4c0ccc 17.26 14.05 2.16 2.18 0.00 0.00 0.00 0.00
18:37:22 18:34:01 vethfc3cc7c 0.42 0.47 0.05 1.00 0.00 0.00 0.00 0.00
18:37:22 18:35:01 vetha47a99a 1.17 1.08 0.57 0.34 0.00 0.00 0.00 0.00
18:37:22 18:35:01 ens3 57.52 44.26 296.16 4.85 0.00 0.00 0.00 0.00
18:37:22 18:35:01 veth2092832 220.63 222.88 24.25 41.44 0.00 0.00 0.00 0.00
18:37:22 18:35:01 vethd4c0ccc 14.61 10.46 1.27 1.52 0.00 0.00 0.00 0.00
18:37:22 18:36:01 vetha47a99a 0.93 0.83 0.12 0.16 0.00 0.00 0.00 0.00
18:37:22 18:36:01 ens3 0.90 0.77 0.26 0.31 0.00 0.00 0.00 0.00
18:37:22 18:36:01 veth2092832 274.05 276.17 30.03 51.73 0.00 0.00 0.00 0.00
18:37:22 18:36:01 vethd4c0ccc 14.86 10.35 1.25 1.53 0.00 0.00 0.00 0.00
18:37:22 18:37:01 ens3 52.97 43.24 69.24 31.67 0.00 0.00 0.00 0.00
18:37:22 18:37:01 lo 26.96 26.96 2.42 2.42 0.00 0.00 0.00 0.00
18:37:22 18:37:01 docker0 134.84 173.85 8.64 1347.66 0.00 0.00 0.00 0.00
18:37:22 Average: ens3 273.31 173.55 5918.01 19.30 0.00 0.00 0.00 0.00
18:37:22 Average: lo 3.78 3.78 0.34 0.34 0.00 0.00 0.00 0.00
18:37:22 Average: docker0 22.47 28.98 1.44 224.61 0.00 0.00 0.00 0.00
18:37:22
18:37:22
18:37:22 ---> sar -P ALL:
18:37:22 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21442) 06/15/25 _x86_64_ (8 CPU)
18:37:22
18:37:22 18:30:11 LINUX RESTART (8 CPU)
18:37:22
18:37:22 18:31:01 CPU %user %nice %system %iowait %steal %idle
18:37:22 18:32:01 all 16.23 0.00 4.58 4.44 0.05 74.70
18:37:22 18:32:01 0 20.16 0.00 4.96 9.68 0.07 65.13
18:37:22 18:32:01 1 19.47 0.00 3.68 1.34 0.05 75.46
18:37:22 18:32:01 2 31.23 0.00 5.87 3.98 0.05 58.87
18:37:22 18:32:01 3 20.15 0.00 4.77 1.98 0.07 73.02
18:37:22 18:32:01 4 10.65 0.00 4.06 2.14 0.03 83.11
18:37:22 18:32:01 5 9.26 0.00 4.39 1.03 0.03 85.29
18:37:22 18:32:01 6 9.19 0.00 4.73 13.88 0.03 72.18
18:37:22 18:32:01 7 9.83 0.00 4.16 1.48 0.03 84.51
18:37:22 18:33:01 all 16.60 0.00 4.26 10.57 0.07 68.50
18:37:22 18:33:01 0 14.98 0.00 4.27 2.56 0.07 78.12
18:37:22 18:33:01 1 15.55 0.00 6.05 36.50 0.08 41.82
18:37:22 18:33:01 2 13.90 0.00 3.98 5.05 0.10 76.97
18:37:22 18:33:01 3 17.59 0.00 4.00 13.29 0.07 65.05
18:37:22 18:33:01 4 15.95 0.00 4.01 3.14 0.07 76.83
18:37:22 18:33:01 5 16.70 0.00 3.96 7.27 0.07 72.00
18:37:22 18:33:01 6 22.23 0.00 4.28 9.89 0.07 63.53
18:37:22 18:33:01 7 15.89 0.00 3.60 6.93 0.07 73.52
18:37:22 18:34:01 all 17.73 0.00 2.25 2.78 0.07 77.17
18:37:22 18:34:01 0 21.13 0.00 3.30 2.11 0.08 73.36
18:37:22 18:34:01 1 15.12 0.00 2.06 1.29 0.08 81.45
18:37:22 18:34:01 2 19.60 0.00 2.93 1.98 0.07 75.43
18:37:22 18:34:01 3 18.59 0.00 2.05 1.16 0.07 78.14
18:37:22 18:34:01 4 20.59 0.00 1.94 0.62 0.07 76.78
18:37:22 18:34:01 5 14.20 0.00 1.70 0.47 0.08 83.55
18:37:22 18:34:01 6 16.87 0.00 1.79 0.18 0.07 81.08
18:37:22 18:34:01 7 15.69 0.00 2.28 14.45 0.08 67.49
18:37:22 18:35:01 all 10.02 0.00 1.68 2.41 0.07 85.83
18:37:22 18:35:01 0 8.08 0.00 1.86 3.19 0.05 86.82
18:37:22 18:35:01 1 9.41 0.00 1.59 4.81 0.07 84.12
18:37:22 18:35:01 2 6.52 0.00 1.48 0.07 0.07 91.86
18:37:22 18:35:01 3 9.71 0.00 1.76 0.13 0.05 88.35
18:37:22 18:35:01 4 10.31 0.00 1.54 0.27 0.07 87.81
18:37:22 18:35:01 5 7.37 0.00 1.74 6.11 0.08 84.69
18:37:22 18:35:01 6 18.44 0.00 1.56 3.56 0.08 76.35
18:37:22 18:35:01 7 10.16 0.00 1.86 1.08 0.07 86.83
18:37:22 18:36:01 all 1.42 0.00 0.30 1.27 0.04 96.98
18:37:22 18:36:01 0 1.08 0.00 0.42 2.07 0.07 96.36
18:37:22 18:36:01 1 3.03 0.00 0.23 0.03 0.03 96.67
18:37:22 18:36:01 2 1.22 0.00 0.20 0.02 0.03 98.53
18:37:22 18:36:01 3 0.85 0.00 0.32 0.02 0.02 98.80
18:37:22 18:36:01 4 1.27 0.00 0.10 0.07 0.03 98.53
18:37:22 18:36:01 5 1.53 0.00 0.42 7.89 0.05 90.11
18:37:22 18:36:01 6 0.63 0.00 0.13 0.03 0.07 99.13
18:37:22 18:36:01 7 1.70 0.00 0.60 0.00 0.03 97.66
18:37:22 18:37:01 all 4.97 0.00 0.76 0.75 0.04 93.49
18:37:22 18:37:01 0 23.68 0.00 1.08 3.62 0.05 71.56
18:37:22 18:37:01 1 1.77 0.00 0.68 1.50 0.03 96.01
18:37:22 18:37:01 2 3.49 0.00 0.94 0.43 0.03 95.10
18:37:22 18:37:01 3 1.50 0.00 0.60 0.12 0.02 97.76
18:37:22 18:37:01 4 1.22 0.00 0.63 0.12 0.02 98.01
18:37:22 18:37:01 5 1.67 0.00 0.72 0.05 0.03 97.52
18:37:22 18:37:01 6 1.39 0.00 0.57 0.03 0.03 97.98
18:37:22 18:37:01 7 5.05 0.00 0.79 0.08 0.03 94.05
18:37:22 Average: all 11.15 0.00 2.30 3.70 0.06 82.80
18:37:22 Average: 0 14.86 0.00 2.65 3.87 0.06 78.56
18:37:22 Average: 1 10.71 0.00 2.38 7.54 0.06 79.31
18:37:22 Average: 2 12.66 0.00 2.56 1.93 0.06 82.79
18:37:22 Average: 3 11.38 0.00 2.25 2.78 0.05 83.55
18:37:22 Average: 4 9.99 0.00 2.05 1.06 0.05 86.86
18:37:22 Average: 5 8.44 0.00 2.15 3.80 0.06 85.55
18:37:22 Average: 6 11.44 0.00 2.17 4.59 0.06 81.74
18:37:22 Average: 7 9.71 0.00 2.21 4.01 0.05 84.02
18:37:22
18:37:22
18:37:22
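The two reports above are standard sysstat output: sar -b (I/O transfer rates), -r (memory utilization) and -n DEV (per-interface traffic) for the first block, and sar -P ALL (per-CPU figures plus the "all" aggregate) for the second. They can be regenerated from the day's binary activity file; a minimal sketch, assuming sysstat is collecting and using the Debian/Ubuntu file layout (the path differs on other distros):

#!/usr/bin/env bash
# Re-render the reports above from the sysstat activity file.
set -euo pipefail

SARFILE="/var/log/sysstat/sa$(date +%d)"   # Debian/Ubuntu; RHEL uses /var/log/sa/saDD

sar -b -r -n DEV -f "$SARFILE"   # I/O rates, memory usage, per-interface traffic
sar -P ALL -f "$SARFILE"         # per-CPU utilization plus the "all" aggregate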