Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-23044 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-aAY9gS4tE9xY/agent.2034
SSH_AGENT_PID=2036
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/private_key_13243661091895471621.key (/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/private_key_13243661091895471621.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 8b99874d0fe646f509546f6b38b185b8f089ba50 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8b99874d0fe646f509546f6b38b185b8f089ba50 # timeout=30
Commit message: "Add missing delete composition in CSIT"
 > git rev-list --no-walk 8b99874d0fe646f509546f6b38b185b8f089ba50 # timeout=10
provisioning config files...
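The fetch/checkout sequence above can be reproduced outside Jenkins; a minimal sketch, assuming network access to the cloud.onap.org mirror (the local directory name below is illustrative, not the Jenkins workspace path):

    # Recreate the build's checkout of policy/docker at the exact revision it used.
    git init docker-workspace
    cd docker-workspace
    git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
    # Detached, forced checkout of the commit the job built.
    git checkout -f 8b99874d0fe646f509546f6b38b185b8f089ba50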
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins1270798518128545664.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-0TrN
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-0TrN/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-0TrN/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.41
botocore==1.38.41
bs4==0.0.2
cachetools==5.5.2
certifi==2025.6.15
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.3.1
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
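The lf-activate-venv() messages above describe a standard venv bootstrap; a minimal sketch of the equivalent steps, assuming pyenv has already put a python3 on PATH (the venv path is illustrative, the real build used a randomly named directory like /tmp/venv-0TrN):

    # Create the venv, remember its location, install lftools, and activate it.
    python3 -m venv /tmp/venv-example
    echo /tmp/venv-example > /tmp/.os_lf_venv       # "Save venv in file" step
    /tmp/venv-example/bin/pip install lftools       # "Installing: lftools" step
    export PATH=/tmp/venv-example/bin:$PATH         # "Adding ... to PATH" step
    pip freeze                                      # produces the requirements listing above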
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/sh /tmp/jenkins4670715133269447998.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/sh -xe /tmp/jenkins3886059081258269462.sh
+ /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/csit/run-project-csit.sh xacml-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress meter omitted; 60.2M downloaded in ~1s at ~51.1M/s]
Setting project configuration for: xacml-pdp
Configuring docker compose...
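The "Installing now..." step fetches the Docker Compose v2 CLI plugin as a single binary. A minimal sketch of that kind of install, plus the stdin-based login the warning above recommends; the release URL, architecture, and credential variables are assumptions for illustration, since the log does not show which URL the script used:

    # Log in the way the warning suggests: password from stdin, not argv.
    echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USER" --password-stdin

    # Install the compose CLI plugin for the current user.
    mkdir -p ~/.docker/cli-plugins
    curl -fsSL -o ~/.docker/cli-plugins/docker-compose \
      https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64
    chmod +x ~/.docker/cli-plugins/docker-compose
    docker compose version   # now resolves instead of "'compose' is not a docker command"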
Starting xacml-pdp using postgres + Grafana/Prometheus
postgres Pulling
zookeeper Pulling
kafka Pulling
prometheus Pulling
policy-db-migrator Pulling
pap Pulling
xacml-pdp Pulling
api Pulling
grafana Pulling
[per-layer "Pulling fs layer" / Downloading / Verifying Checksum / Extracting / Pull complete progress meters omitted]
xacml-pdp Pulled
api Pulled
pap Pulled
policy-db-migrator Pulled
[remaining layer download/extract progress omitted; the captured log ends mid-extraction]
65.18MB/375MB 408012a7b118 Pull complete 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 55f2b468da67 Extracting [===> ] 18.38MB/257.9MB 04f6155c873d Extracting [================================================> ] 104.2MB/107.3MB e73cb4a42719 Extracting [===================> ] 42.89MB/109.1MB eabd8714fec9 Extracting [=========> ] 72.97MB/375MB 55f2b468da67 Extracting [====> ] 22.28MB/257.9MB 04f6155c873d Extracting [=================================================> ] 105.3MB/107.3MB e73cb4a42719 Extracting [=====================> ] 46.79MB/109.1MB eabd8714fec9 Extracting [===========> ] 83MB/375MB 04f6155c873d Extracting [==================================================>] 107.3MB/107.3MB eabd8714fec9 Extracting [============> ] 90.8MB/375MB 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB e73cb4a42719 Extracting [=======================> ] 50.69MB/109.1MB 44986281b8b9 Pull complete 04f6155c873d Pull complete bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 55f2b468da67 Extracting [=====> ] 28.97MB/257.9MB eabd8714fec9 Extracting [============> ] 95.26MB/375MB 55f2b468da67 Extracting [=======> ] 36.21MB/257.9MB eabd8714fec9 Extracting [=============> ] 99.71MB/375MB e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB 85dde7dceb0a Extracting [> ] 557.1kB/63.48MB 55f2b468da67 Extracting [========> ] 42.34MB/257.9MB eabd8714fec9 Extracting [=============> ] 104.7MB/375MB e73cb4a42719 Extracting [========================> ] 54.03MB/109.1MB 55f2b468da67 Extracting [=========> ] 49.58MB/257.9MB eabd8714fec9 Extracting [==============> ] 108.1MB/375MB 85dde7dceb0a Extracting [> ] 1.114MB/63.48MB e73cb4a42719 Extracting [=========================> ] 56.26MB/109.1MB 55f2b468da67 Extracting [===========> ] 59.6MB/257.9MB eabd8714fec9 Extracting [==============> ] 111.4MB/375MB 85dde7dceb0a Extracting [=> ] 1.671MB/63.48MB e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB 55f2b468da67 Extracting [=============> ] 67.96MB/257.9MB 55f2b468da67 Extracting [=============> ] 68.52MB/257.9MB eabd8714fec9 Extracting [===============> ] 115.3MB/375MB bf70c5107ab5 Pull complete 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB e73cb4a42719 Extracting [===========================> ] 60.16MB/109.1MB 85dde7dceb0a Extracting [=> ] 2.228MB/63.48MB eabd8714fec9 Extracting [===============> ] 119.2MB/375MB 55f2b468da67 Extracting [==============> ] 75.2MB/257.9MB e73cb4a42719 Extracting [============================> ] 62.39MB/109.1MB 85dde7dceb0a Extracting [==> ] 2.785MB/63.48MB 55f2b468da67 Extracting [================> ] 84.67MB/257.9MB eabd8714fec9 Extracting [================> ] 123.7MB/375MB e73cb4a42719 Extracting [==============================> ] 66.85MB/109.1MB 1ccde423731d Pull complete 55f2b468da67 Extracting [==================> ] 93.59MB/257.9MB 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B e73cb4a42719 Extracting [===============================> ] 68.52MB/109.1MB 85dde7dceb0a Extracting [===> ] 4.456MB/63.48MB eabd8714fec9 Extracting 
[=================> ] 127.6MB/375MB 55f2b468da67 Extracting [===================> ] 101.9MB/257.9MB e73cb4a42719 Extracting [=================================> ] 72.97MB/109.1MB eabd8714fec9 Extracting [=================> ] 130.4MB/375MB 85dde7dceb0a Extracting [====> ] 5.571MB/63.48MB 55f2b468da67 Extracting [====================> ] 107MB/257.9MB e73cb4a42719 Extracting [==================================> ] 76.32MB/109.1MB eabd8714fec9 Extracting [=================> ] 133.7MB/375MB 85dde7dceb0a Extracting [======> ] 8.356MB/63.48MB 55f2b468da67 Extracting [=====================> ] 111.4MB/257.9MB eabd8714fec9 Extracting [==================> ] 135.9MB/375MB 7221d93db8a9 Pull complete e73cb4a42719 Extracting [====================================> ] 79.66MB/109.1MB 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B 85dde7dceb0a Extracting [=======> ] 9.47MB/63.48MB 55f2b468da67 Extracting [======================> ] 114.8MB/257.9MB eabd8714fec9 Extracting [==================> ] 139.3MB/375MB e73cb4a42719 Extracting [=====================================> ] 82.44MB/109.1MB 85dde7dceb0a Extracting [=========> ] 11.7MB/63.48MB 55f2b468da67 Extracting [=======================> ] 119.2MB/257.9MB eabd8714fec9 Extracting [===================> ] 142.6MB/375MB e73cb4a42719 Extracting [=======================================> ] 85.79MB/109.1MB 85dde7dceb0a Extracting [==========> ] 13.37MB/63.48MB 55f2b468da67 Extracting [========================> ] 124.2MB/257.9MB e73cb4a42719 Extracting [=========================================> ] 90.8MB/109.1MB 85dde7dceb0a Extracting [============> ] 15.6MB/63.48MB eabd8714fec9 Extracting [===================> ] 146.5MB/375MB 55f2b468da67 Extracting [========================> ] 128.7MB/257.9MB eabd8714fec9 Extracting [====================> ] 151.5MB/375MB 55f2b468da67 Extracting [==========================> ] 134.3MB/257.9MB e73cb4a42719 Extracting [==========================================> ] 92.47MB/109.1MB 85dde7dceb0a Extracting [=============> ] 16.71MB/63.48MB 55f2b468da67 Extracting [==========================> ] 137.6MB/257.9MB eabd8714fec9 Extracting [====================> ] 152.6MB/375MB e73cb4a42719 Extracting [===========================================> ] 94.14MB/109.1MB 85dde7dceb0a Extracting [==============> ] 18.38MB/63.48MB 55f2b468da67 Extracting [===========================> ] 140.9MB/257.9MB eabd8714fec9 Extracting [====================> ] 156MB/375MB e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB 85dde7dceb0a Extracting [================> ] 21.17MB/63.48MB 7df673c7455d Pull complete 55f2b468da67 Extracting [============================> ] 145.4MB/257.9MB eabd8714fec9 Extracting [=====================> ] 159.3MB/375MB e73cb4a42719 Extracting [=============================================> ] 99.71MB/109.1MB 85dde7dceb0a Extracting [==================> ] 23.4MB/63.48MB 55f2b468da67 Extracting [============================> ] 149.3MB/257.9MB e73cb4a42719 Extracting [==============================================> ] 100.8MB/109.1MB 85dde7dceb0a Extracting [==================> ] 23.95MB/63.48MB eabd8714fec9 Extracting [=====================> ] 163.2MB/375MB 55f2b468da67 Extracting [=============================> ] 152.6MB/257.9MB 85dde7dceb0a Extracting [====================> ] 26.18MB/63.48MB e73cb4a42719 Extracting [===============================================> ] 
103.6MB/109.1MB eabd8714fec9 Extracting [======================> ] 167.1MB/375MB 55f2b468da67 Extracting [==============================> ] 156.5MB/257.9MB 85dde7dceb0a Extracting [======================> ] 28.41MB/63.48MB eabd8714fec9 Extracting [=======================> ] 173.2MB/375MB e73cb4a42719 Extracting [================================================> ] 105.3MB/109.1MB 55f2b468da67 Extracting [==============================> ] 159.9MB/257.9MB 85dde7dceb0a Extracting [========================> ] 30.64MB/63.48MB eabd8714fec9 Extracting [========================> ] 183.8MB/375MB e73cb4a42719 Extracting [=================================================> ] 107MB/109.1MB 55f2b468da67 Extracting [===============================> ] 164.9MB/257.9MB eabd8714fec9 Extracting [=========================> ] 193.9MB/375MB 85dde7dceb0a Extracting [=========================> ] 32.31MB/63.48MB e73cb4a42719 Extracting [=================================================> ] 108.1MB/109.1MB 55f2b468da67 Extracting [================================> ] 169.3MB/257.9MB eabd8714fec9 Extracting [===========================> ] 202.8MB/375MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 85dde7dceb0a Extracting [===========================> ] 35.09MB/63.48MB 85dde7dceb0a Extracting [==============================> ] 38.44MB/63.48MB eabd8714fec9 Extracting [============================> ] 216.7MB/375MB 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB 85dde7dceb0a Extracting [================================> ] 41.78MB/63.48MB eabd8714fec9 Extracting [=============================> ] 217.8MB/375MB 85dde7dceb0a Extracting [=================================> ] 42.34MB/63.48MB 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB prometheus Pulled e73cb4a42719 Pull complete a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB 85dde7dceb0a Extracting [===================================> ] 45.12MB/63.48MB eabd8714fec9 Extracting [=============================> ] 222.3MB/375MB eabd8714fec9 Extracting [==============================> ] 225.1MB/375MB a83b68436f09 Pull complete 85dde7dceb0a Extracting [======================================> ] 49.02MB/63.48MB 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B eabd8714fec9 Extracting [==============================> ] 228.4MB/375MB 85dde7dceb0a Extracting [========================================> ] 51.25MB/63.48MB 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB eabd8714fec9 Extracting [===============================> ] 233.4MB/375MB 85dde7dceb0a Extracting [==========================================> ] 54.03MB/63.48MB 55f2b468da67 Extracting [==================================> ] 178.3MB/257.9MB eabd8714fec9 Extracting [===============================> ] 238.4MB/375MB 85dde7dceb0a Extracting [==============================================> ] 59.05MB/63.48MB 55f2b468da67 Extracting [===================================> ] 181.6MB/257.9MB eabd8714fec9 Extracting [================================> ] 242.9MB/375MB 85dde7dceb0a Extracting 
[==============================================> ] 59.6MB/63.48MB 55f2b468da67 Extracting [====================================> ] 186.6MB/257.9MB eabd8714fec9 Extracting [================================> ] 246.2MB/375MB 55f2b468da67 Extracting [=====================================> ] 191.6MB/257.9MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB eabd8714fec9 Extracting [=================================> ] 248.4MB/375MB 55f2b468da67 Extracting [=====================================> ] 192.2MB/257.9MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB eabd8714fec9 Extracting [=================================> ] 252.3MB/375MB 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB eabd8714fec9 Extracting [==================================> ] 255.7MB/375MB 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB eabd8714fec9 Extracting [==================================> ] 260.7MB/375MB 55f2b468da67 Extracting [======================================> ] 197.2MB/257.9MB eabd8714fec9 Extracting [===================================> ] 262.9MB/375MB 787d6bee9571 Pull complete eabd8714fec9 Extracting [===================================> ] 267.4MB/375MB 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB 85dde7dceb0a Pull complete 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 13ff0988aaea Pull complete eabd8714fec9 Extracting [====================================> ] 270.7MB/375MB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 55f2b468da67 Extracting [=======================================> ] 204.4MB/257.9MB eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 55f2b468da67 Extracting [========================================> ] 206.7MB/257.9MB eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 7009d5001b77 Pull complete 4b82842ab819 Pull complete eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB eabd8714fec9 Extracting [====================================> ] 274.6MB/375MB 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B eabd8714fec9 Extracting [====================================> ] 276.9MB/375MB 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB 538deb30e80c Pull complete eabd8714fec9 Extracting 
[=====================================> ] 280.8MB/375MB 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB eabd8714fec9 Extracting [=====================================> ] 284.1MB/375MB 55f2b468da67 Extracting [=========================================> ] 213.9MB/257.9MB eabd8714fec9 Extracting [======================================> ] 286.3MB/375MB 55f2b468da67 Extracting [=========================================> ] 214.5MB/257.9MB eabd8714fec9 Extracting [======================================> ] 291.3MB/375MB 55f2b468da67 Extracting [==========================================> ] 217.3MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 294.1MB/375MB 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 295.2MB/375MB 55f2b468da67 Extracting [===========================================> ] 222.8MB/257.9MB 55f2b468da67 Extracting [===========================================> ] 226.2MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB 7e568a0dc8fb Pull complete 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB eabd8714fec9 Extracting [========================================> ] 303MB/375MB grafana Pulled eabd8714fec9 Extracting [========================================> ] 303.6MB/375MB 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB 55f2b468da67 Extracting [=============================================> ] 235.6MB/257.9MB eabd8714fec9 Extracting [========================================> ] 305.3MB/375MB 55f2b468da67 Extracting [=============================================> ] 236.7MB/257.9MB eabd8714fec9 Extracting [========================================> ] 306.4MB/375MB 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB eabd8714fec9 Extracting [=========================================> ] 308.1MB/375MB eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB eabd8714fec9 Extracting [=========================================> ] 311.4MB/375MB 55f2b468da67 Extracting [================================================> ] 248.4MB/257.9MB eabd8714fec9 Extracting [=========================================> ] 312.5MB/375MB 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB eabd8714fec9 Extracting [=========================================> ] 314.2MB/375MB 55f2b468da67 Extracting [=================================================> ] 257.4MB/257.9MB eabd8714fec9 Extracting [==========================================> ] 317.5MB/375MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB eabd8714fec9 Extracting [==========================================> ] 
321.4MB/375MB eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB eabd8714fec9 Extracting [===========================================> ] 326.4MB/375MB postgres Pulled eabd8714fec9 Extracting [===========================================> ] 327.5MB/375MB eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB eabd8714fec9 Extracting [============================================> ] 333.1MB/375MB eabd8714fec9 Extracting [=============================================> ] 338.1MB/375MB eabd8714fec9 Extracting [=============================================> ] 340.4MB/375MB eabd8714fec9 Extracting [=============================================> ] 341.5MB/375MB 55f2b468da67 Pull complete eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 82bfc142787e Extracting [> ] 98.3kB/8.613MB 82bfc142787e Extracting [==> ] 491.5kB/8.613MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 82bfc142787e Pull complete 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB 46baca71a4ef Pull complete b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB b0e0ef7895f4 Extracting [======================> ] 16.52MB/37.01MB eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB b0e0ef7895f4 Extracting [===========================================> ] 32.24MB/37.01MB eabd8714fec9 Extracting [===============================================> ] 356.5MB/375MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB b0e0ef7895f4 Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B eabd8714fec9 Extracting [===============================================> ] 357.6MB/375MB eabd8714fec9 Extracting [================================================> ] 367.1MB/375MB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B eabd8714fec9 Extracting [=================================================> ] 371MB/375MB 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B eabd8714fec9 Extracting [==================================================>] 375MB/375MB e040ea11fa10 Pull complete eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting 
[==================================================>] 1.103kB/1.103kB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 09d5a3f70313 Extracting [=====> ] 11.7MB/109.2MB 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 09d5a3f70313 Extracting [=========> ] 21.17MB/109.2MB 8f10199ed94b Extracting [==> ] 491.5kB/8.768MB 09d5a3f70313 Extracting [================> ] 36.21MB/109.2MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 09d5a3f70313 Extracting [======================> ] 50.14MB/109.2MB 09d5a3f70313 Extracting [==============================> ] 65.73MB/109.2MB f963a77d2726 Pull complete f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 09d5a3f70313 Extracting [=====================================> ] 81.33MB/109.2MB f3a82e9f1761 Extracting [=============> ] 12.39MB/44.41MB 09d5a3f70313 Extracting [===========================================> ] 94.7MB/109.2MB f3a82e9f1761 Extracting [========================> ] 21.56MB/44.41MB 09d5a3f70313 Extracting [===============================================> ] 104.7MB/109.2MB f3a82e9f1761 Extracting [======================================> ] 33.95MB/44.41MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 356f5c2c843b Pull complete kafka Pulled 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Pull complete da3ed5db7103 Extracting [> ] 557.1kB/127.4MB da3ed5db7103 Extracting [===> ] 9.47MB/127.4MB da3ed5db7103 Extracting [=========> ] 23.95MB/127.4MB da3ed5db7103 Extracting [===============> ] 40.67MB/127.4MB da3ed5db7103 Extracting [======================> ] 57.93MB/127.4MB da3ed5db7103 Extracting [==============================> ] 77.43MB/127.4MB da3ed5db7103 Extracting 
[======================================> ] 97.48MB/127.4MB da3ed5db7103 Extracting [=============================================> ] 115.9MB/127.4MB da3ed5db7103 Extracting [================================================> ] 122.6MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Pull complete zookeeper Pulled Network compose_default Creating Network compose_default Created Container zookeeper Creating Container postgres Creating Container prometheus Creating Container postgres Created Container policy-db-migrator Creating Container prometheus Created Container grafana Creating Container zookeeper Created Container kafka Creating Container policy-db-migrator Created Container policy-api Creating Container grafana Created Container kafka Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-xacml-pdp Creating Container policy-xacml-pdp Created Container prometheus Starting Container zookeeper Starting Container postgres Starting Container zookeeper Started Container kafka Starting Container kafka Started Container postgres Started Container policy-db-migrator Starting Container policy-db-migrator Started Container policy-api Starting Container policy-api Started Container policy-pap Starting Container policy-pap Started Container policy-xacml-pdp Starting Container policy-xacml-pdp Started Container prometheus Started Container grafana Starting Container grafana Started Prometheus server: http://localhost:30259 Grafana server: http://localhost:30269 Waiting 1 minute for xacml-pdp to start... Checking if REST port 30004 is open on localhost ... IMAGE NAMES STATUS nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT policy-xacml-pdp Up About a minute nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT policy-pap Up About a minute nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT policy-api Up About a minute nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9 kafka Up About a minute nexus3.onap.org:10001/grafana/grafana:latest grafana Up About a minute nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest zookeeper Up About a minute nexus3.onap.org:10001/library/postgres:16.4 postgres Up About a minute nexus3.onap.org:10001/prom/prometheus:latest prometheus Up About a minute Cloning into '/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/csit/resources/tests/models'... 
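The pull, create, start, and port-probe sequence above can be approximated outside Jenkins. A minimal sketch, assuming a compose file with the same service names in the current directory (the file location, retry count, and use of nc are assumptions, not taken from this job):

#!/bin/bash
# Hypothetical reproduction of the orchestration steps logged above.
# Assumes ./docker-compose.yml defines the same services and nc(1) is installed.
docker compose up -d            # pulls missing images, then creates and starts the containers

# Probe the xacml-pdp REST port the way the "Checking if REST port 30004 is open"
# step does, retrying for roughly the one minute the job waits.
for attempt in $(seq 1 12); do
  if nc -z localhost 30004 2>/dev/null; then
    echo "REST port 30004 is open on localhost"
    exit 0
  fi
  sleep 5
done
echo "Timed out waiting for localhost:30004" >&2
exit 1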
Building robot framework docker image
sha256:73ee7712b6b27ab9695fa2e2f0723d017617ab2017319013b97a361ab9984487
top - 18:34:07 up 4 min, 0 users, load average: 2.10, 1.54, 0.65
Tasks: 229 total, 1 running, 150 sleeping, 0 stopped, 0 zombie
%Cpu(s): 15.7 us, 3.7 sy, 0.0 ni, 77.1 id, 3.3 wa, 0.0 hi, 0.1 si, 0.1 st
          total        used        free      shared  buff/cache   available
Mem:        31G        2.5G         21G         27M        7.1G         28G
Swap:      1.0G          0B        1.0G
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up About a minute
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up About a minute
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up About a minute
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up About a minute
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up About a minute
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up About a minute
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up About a minute
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up About a minute
CONTAINER ID   NAME               CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
96216e75c4ff   policy-xacml-pdp   1.57%   184.1MiB / 31.41GiB   0.57%   43.6kB / 53.5kB   0B / 4.1kB       51
b8f93e6eab87   policy-pap         0.95%   494MiB / 31.41GiB     1.54%   2.13MB / 1.06MB   0B / 139MB       67
a7a5160e4e7b   policy-api         0.45%   407MiB / 31.41GiB     1.27%   1.14MB / 985kB    0B / 0B          59
225d468056aa   kafka              4.06%   386.5MiB / 31.41GiB   1.20%   178kB / 169kB     0B / 582kB       83
fb0919c0ad8e   grafana            0.27%   112.1MiB / 31.41GiB   0.35%   18.9MB / 94.1kB   0B / 30.9MB      21
4f294bc087a3   zookeeper          0.08%   94.23MiB / 31.41GiB   0.29%   54kB / 44.5kB     90.1kB / 451kB   62
db7ed2b810c1   postgres           0.01%   85.21MiB / 31.41GiB   0.26%   2.56MB / 3.74MB   0B / 160MB       26
7f5f312c6eb4   prometheus         0.00%   20.52MiB / 31.41GiB   0.06%   62.5kB / 3.44kB   0B / 0B          12
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
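The -v entries above are standard Robot Framework variable overrides. A minimal sketch of how a runner can hand them to the robot CLI (the runner script itself is hypothetical; only the variable names, suite files, and result paths come from the log):

#!/bin/bash
# Hypothetical runner step: expand the logged overrides onto the Robot Framework CLI.
# Two of the -v NAME:VALUE pairs from the log are shown; the rest follow the same pattern.
ROBOT_VARIABLES="-v POLICY_PDPX_IP:policy-xacml-pdp:6969 -v PROMETHEUS_IP:prometheus:9090"
robot --outputdir /tmp/results ${ROBOT_VARIABLES} xacml-pdp-test.robot xacml-pdp-slas.robot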
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | MakeTopics :: Creates the Policy topics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ExecuteXacmlPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
policy-csit | 4 tests, 4 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
policy-csit | 6 tests, 6 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                        NAMES              STATUS
nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT   policy-xacml-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT         policy-pap         Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT         policy-api         Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9            kafka              Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                 grafana            Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest       zookeeper          Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                  postgres           Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                 prometheus         Up 3 minutes
Shut down started!
Collecting logs from docker compose containers...
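ValidatePolicyDecisionsTotalCounter reads the decision counter back from the Prometheus server started earlier. A hedged sketch of that kind of check against its HTTP API (the metric name pdpx_policy_decisions_total is an assumption for illustration, not taken from this log):

#!/bin/bash
# Hypothetical SLA-style check: ask Prometheus for the xacml-pdp decision counter.
# /api/v1/query accepts a form-encoded POST; the metric name below is an assumption.
curl -s http://localhost:30259/api/v1/query \
  --data-urlencode 'query=sum(pdpx_policy_decisions_total)' |
  python3 -c 'import json,sys; r=json.load(sys.stdin)["data"]["result"]; print(r[0]["value"][1] if r else "no samples")'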
grafana | logger=settings t=2025-06-22T18:32:26.409529375Z level=info msg="Starting Grafana" version=12.0.2 commit=5bda17e7c1cb313eb96266f2fdda73a6b35c3977 branch=HEAD compiled=2025-06-22T18:32:26Z
grafana | logger=settings t=2025-06-22T18:32:26.409825345Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-22T18:32:26.409837585Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-22T18:32:26.409841666Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-22T18:32:26.409844946Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-22T18:32:26.409849026Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-22T18:32:26.409852486Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-22T18:32:26.409855186Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-22T18:32:26.409858876Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-22T18:32:26.409863016Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-22T18:32:26.409866276Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-22T18:32:26.409869717Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-22T18:32:26.409874477Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-22T18:32:26.409880737Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-22T18:32:26.409884407Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-22T18:32:26.409888517Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-22T18:32:26.409891737Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-22T18:32:26.409894917Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-22T18:32:26.409898198Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-22T18:32:26.41023181Z level=info msg=FeatureToggles newFiltersUI=true pluginsDetailsRightPanel=true logsContextDatasourceUi=true logsInfiniteScrolling=true promQLScope=true angularDeprecationUI=true alertingRuleVersionHistoryRestore=true prometheusUsesCombobox=true alertingQueryAndExpressionsStepMode=true azureMonitorEnableUserAuth=true dashboardScene=true ssoSettingsApi=true dataplaneFrontendFallback=true logRowsPopoverMenu=true lokiQuerySplitting=true alertingInsights=true lokiStructuredMetadata=true alertingRulePermanentlyDelete=true influxdbBackendMigration=true addFieldFromCalculationStatFunctions=true ssoSettingsSAML=true logsExploreTableVisualisation=true correlations=true dashboardSceneSolo=true alertingNotificationsStepMode=true newPDFRendering=true alertingSimplifiedRouting=true kubernetesPlaylists=true preinstallAutoUpdate=true awsAsyncQueryCaching=true nestedFolders=true alertingUIOptimizeReducer=true lokiLabelNamesQueryApi=true cloudWatchRoundUpEndTime=true azureMonitorPrometheusExemplars=true recoveryThreshold=true cloudWatchNewLabelParsing=true grafanaconThemes=true panelMonitoring=true recordedQueriesMulti=true prometheusAzureOverrideAudience=true unifiedRequestLog=true annotationPermissionUpdate=true alertingApiServer=true logsPanelControls=true reportingUseRawTimeRange=true transformationsRedesign=true kubernetesClientDashboardsFolders=true cloudWatchCrossAccountQuerying=true dashboardSceneForViewers=true lokiQueryHints=true useSessionStorageForRedirection=true externalCorePlugins=true tlsMemcached=true dashgpt=true publicDashboardsScene=true failWrongDSUID=true pinNavItems=true formatString=true unifiedStorageSearchPermissionFiltering=true newDashboardSharingComponent=true groupToNestedTableTransformation=true onPremToCloudMigrations=true alertRuleRestore=true alertingRuleRecoverDeleted=true
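The "Config overridden from Environment variable" entries above use Grafana's GF_<SECTION>_<KEY> convention. A minimal sketch of starting the same image with those overrides (the container name is an assumption; the host port matches the Grafana URL logged earlier):

# Hypothetical standalone run of the Grafana image pulled above, with the same GF_* overrides.
docker run -d --name grafana-demo \
  -e GF_PATHS_DATA=/var/lib/grafana \
  -e GF_PATHS_LOGS=/var/log/grafana \
  -e GF_PATHS_PLUGINS=/var/lib/grafana/plugins \
  -e GF_PATHS_PROVISIONING=/etc/grafana/provisioning \
  -p 30269:3000 \
  nexus3.onap.org:10001/grafana/grafana:latest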
awsAsyncQueryCaching=true nestedFolders=true alertingUIOptimizeReducer=true lokiLabelNamesQueryApi=true cloudWatchRoundUpEndTime=true azureMonitorPrometheusExemplars=true recoveryThreshold=true cloudWatchNewLabelParsing=true grafanaconThemes=true panelMonitoring=true recordedQueriesMulti=true prometheusAzureOverrideAudience=true unifiedRequestLog=true annotationPermissionUpdate=true alertingApiServer=true logsPanelControls=true reportingUseRawTimeRange=true transformationsRedesign=true kubernetesClientDashboardsFolders=true cloudWatchCrossAccountQuerying=true dashboardSceneForViewers=true lokiQueryHints=true useSessionStorageForRedirection=true externalCorePlugins=true tlsMemcached=true dashgpt=true publicDashboardsScene=true failWrongDSUID=true pinNavItems=true formatString=true unifiedStorageSearchPermissionFiltering=true newDashboardSharingComponent=true groupToNestedTableTransformation=true onPremToCloudMigrations=true alertRuleRestore=true alertingRuleRecoverDeleted=true grafana | logger=sqlstore t=2025-06-22T18:32:26.410289782Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2025-06-22T18:32:26.410305773Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2025-06-22T18:32:26.41194866Z level=info msg="Locking database" grafana | logger=migrator t=2025-06-22T18:32:26.411963141Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2025-06-22T18:32:26.412612103Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2025-06-22T18:32:26.413455964Z level=info msg="Migration successfully executed" id="create migration_log table" duration=843.551µs grafana | logger=migrator t=2025-06-22T18:32:26.416624396Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2025-06-22T18:32:26.417147504Z level=info msg="Migration successfully executed" id="create user table" duration=521.529µs grafana | logger=migrator t=2025-06-22T18:32:26.42409907Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2025-06-22T18:32:26.42496097Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=861.63µs grafana | logger=migrator t=2025-06-22T18:32:26.430218425Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2025-06-22T18:32:26.431578673Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.360598ms grafana | logger=migrator t=2025-06-22T18:32:26.435701459Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2025-06-22T18:32:26.436747525Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.045426ms grafana | logger=migrator t=2025-06-22T18:32:26.442581631Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2025-06-22T18:32:26.443692361Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.11034ms grafana | logger=migrator t=2025-06-22T18:32:26.44706619Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2025-06-22T18:32:26.449562408Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.495898ms grafana | logger=migrator t=2025-06-22T18:32:26.473255173Z level=info msg="Executing migration" 
id="create user table v2" grafana | logger=migrator t=2025-06-22T18:32:26.4745554Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.299507ms grafana | logger=migrator t=2025-06-22T18:32:26.477948389Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2025-06-22T18:32:26.479347168Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.397639ms grafana | logger=migrator t=2025-06-22T18:32:26.485000128Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2025-06-22T18:32:26.486215881Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.215353ms grafana | logger=migrator t=2025-06-22T18:32:26.490399788Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2025-06-22T18:32:26.490842064Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=441.806µs grafana | logger=migrator t=2025-06-22T18:32:26.494346668Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2025-06-22T18:32:26.494906958Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=559.55µs grafana | logger=migrator t=2025-06-22T18:32:26.501977317Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2025-06-22T18:32:26.503150289Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.168791ms grafana | logger=migrator t=2025-06-22T18:32:26.506847859Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2025-06-22T18:32:26.50688847Z level=info msg="Migration successfully executed" id="Update user table charset" duration=42.761µs grafana | logger=migrator t=2025-06-22T18:32:26.510604001Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2025-06-22T18:32:26.512342543Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.734011ms grafana | logger=migrator t=2025-06-22T18:32:26.53635919Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2025-06-22T18:32:26.537064655Z level=info msg="Migration successfully executed" id="Add missing user data" duration=704.645µs grafana | logger=migrator t=2025-06-22T18:32:26.543856684Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2025-06-22T18:32:26.545615497Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.759063ms grafana | logger=migrator t=2025-06-22T18:32:26.54910124Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2025-06-22T18:32:26.549793264Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=691.805µs grafana | logger=migrator t=2025-06-22T18:32:26.552960276Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2025-06-22T18:32:26.554445308Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.483442ms grafana | logger=migrator t=2025-06-22T18:32:26.557853338Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 
grafana | logger=migrator t=2025-06-22T18:32:26.567074994Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.221916ms grafana | logger=migrator t=2025-06-22T18:32:26.572703713Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2025-06-22T18:32:26.573832442Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.133539ms grafana | logger=migrator t=2025-06-22T18:32:26.577147629Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2025-06-22T18:32:26.577351096Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=203.597µs grafana | logger=migrator t=2025-06-22T18:32:26.579747001Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2025-06-22T18:32:26.580436885Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=689.294µs grafana | logger=migrator t=2025-06-22T18:32:26.585941049Z level=info msg="Executing migration" id="Add is_provisioned column to user" grafana | logger=migrator t=2025-06-22T18:32:26.587746913Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.805414ms grafana | logger=migrator t=2025-06-22T18:32:26.592600075Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2025-06-22T18:32:26.593091842Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=491.247µs grafana | logger=migrator t=2025-06-22T18:32:26.609051695Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" grafana | logger=migrator t=2025-06-22T18:32:26.609822412Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=780.767µs grafana | logger=migrator t=2025-06-22T18:32:26.616891491Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2025-06-22T18:32:26.617246524Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=354.803µs grafana | logger=migrator t=2025-06-22T18:32:26.619833605Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2025-06-22T18:32:26.620371635Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=538.01µs grafana | logger=migrator t=2025-06-22T18:32:26.624535081Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2025-06-22T18:32:26.625819447Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.283866ms grafana | logger=migrator t=2025-06-22T18:32:26.629437204Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2025-06-22T18:32:26.630136569Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=699.295µs grafana | logger=migrator t=2025-06-22T18:32:26.635285961Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator 
t=2025-06-22T18:32:26.635960004Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=685.684µs grafana | logger=migrator t=2025-06-22T18:32:26.639134436Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2025-06-22T18:32:26.640338739Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.203503ms grafana | logger=migrator t=2025-06-22T18:32:26.643937106Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2025-06-22T18:32:26.645042304Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.104508ms grafana | logger=migrator t=2025-06-22T18:32:26.65030061Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2025-06-22T18:32:26.650330111Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=29.931µs grafana | logger=migrator t=2025-06-22T18:32:26.655018547Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2025-06-22T18:32:26.655801104Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=781.217µs grafana | logger=migrator t=2025-06-22T18:32:26.659709133Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2025-06-22T18:32:26.660737719Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.028306ms grafana | logger=migrator t=2025-06-22T18:32:26.665354732Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2025-06-22T18:32:26.666000734Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=644.802µs grafana | logger=migrator t=2025-06-22T18:32:26.669369613Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2025-06-22T18:32:26.669999415Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=629.182µs grafana | logger=migrator t=2025-06-22T18:32:26.675040653Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-22T18:32:26.679962867Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.921454ms grafana | logger=migrator t=2025-06-22T18:32:26.684092642Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2025-06-22T18:32:26.684924892Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=830.28µs grafana | logger=migrator t=2025-06-22T18:32:26.689259635Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2025-06-22T18:32:26.690393475Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.13268ms grafana | logger=migrator t=2025-06-22T18:32:26.694657335Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2025-06-22T18:32:26.695829707Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.171602ms grafana | logger=migrator 
t=2025-06-22T18:32:26.700088187Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2025-06-22T18:32:26.700815662Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=726.775µs grafana | logger=migrator t=2025-06-22T18:32:26.705658964Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2025-06-22T18:32:26.706396429Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=736.775µs grafana | logger=migrator t=2025-06-22T18:32:26.710527536Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2025-06-22T18:32:26.711118566Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=590.72µs grafana | logger=migrator t=2025-06-22T18:32:26.716288929Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2025-06-22T18:32:26.717081047Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=791.437µs grafana | logger=migrator t=2025-06-22T18:32:26.721355498Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2025-06-22T18:32:26.721898337Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=545.959µs grafana | logger=migrator t=2025-06-22T18:32:26.725707501Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2025-06-22T18:32:26.726344173Z level=info msg="Migration successfully executed" id="create star table" duration=636.382µs grafana | logger=migrator t=2025-06-22T18:32:26.729652961Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2025-06-22T18:32:26.730347955Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=694.654µs grafana | logger=migrator t=2025-06-22T18:32:26.770423599Z level=info msg="Executing migration" id="Add column dashboard_uid in star" grafana | logger=migrator t=2025-06-22T18:32:26.773376283Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=2.907953ms grafana | logger=migrator t=2025-06-22T18:32:26.776813914Z level=info msg="Executing migration" id="Add column org_id in star" grafana | logger=migrator t=2025-06-22T18:32:26.778223894Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.40963ms grafana | logger=migrator t=2025-06-22T18:32:26.781352284Z level=info msg="Executing migration" id="Add column updated in star" grafana | logger=migrator t=2025-06-22T18:32:26.782727203Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.372049ms grafana | logger=migrator t=2025-06-22T18:32:26.785779991Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" grafana | logger=migrator t=2025-06-22T18:32:26.786650851Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=870.47µs grafana | logger=migrator t=2025-06-22T18:32:26.792190347Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2025-06-22T18:32:26.792954764Z level=info 
msg="Migration successfully executed" id="create org table v1" duration=764.357µs grafana | logger=migrator t=2025-06-22T18:32:26.796233689Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2025-06-22T18:32:26.797015867Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=781.738µs grafana | logger=migrator t=2025-06-22T18:32:26.80020964Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2025-06-22T18:32:26.801002898Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=792.518µs grafana | logger=migrator t=2025-06-22T18:32:26.804923596Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2025-06-22T18:32:26.805830998Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=907.212µs grafana | logger=migrator t=2025-06-22T18:32:26.811732466Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2025-06-22T18:32:26.813110785Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.377839ms grafana | logger=migrator t=2025-06-22T18:32:26.818262807Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2025-06-22T18:32:26.819422638Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.156301ms grafana | logger=migrator t=2025-06-22T18:32:26.824257108Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2025-06-22T18:32:26.82433019Z level=info msg="Migration successfully executed" id="Update org table charset" duration=73.742µs grafana | logger=migrator t=2025-06-22T18:32:26.828223398Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2025-06-22T18:32:26.828294181Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=71.643µs grafana | logger=migrator t=2025-06-22T18:32:26.83366271Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2025-06-22T18:32:26.833873778Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=211.178µs grafana | logger=migrator t=2025-06-22T18:32:26.845746967Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2025-06-22T18:32:26.846642058Z level=info msg="Migration successfully executed" id="create dashboard table" duration=895.572µs grafana | logger=migrator t=2025-06-22T18:32:26.8503672Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2025-06-22T18:32:26.851371135Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.006735ms grafana | logger=migrator t=2025-06-22T18:32:26.854080711Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2025-06-22T18:32:26.854890799Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=809.098µs grafana | logger=migrator t=2025-06-22T18:32:26.859640577Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator 
t=2025-06-22T18:32:26.860434284Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=793.107µs grafana | logger=migrator t=2025-06-22T18:32:26.864376324Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-22T18:32:26.865277636Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=900.212µs grafana | logger=migrator t=2025-06-22T18:32:26.868664145Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-22T18:32:26.869542547Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=877.742µs grafana | logger=migrator t=2025-06-22T18:32:26.875934172Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-22T18:32:26.883210458Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.276746ms grafana | logger=migrator t=2025-06-22T18:32:26.886783865Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-22T18:32:26.887751959Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=967.064µs grafana | logger=migrator t=2025-06-22T18:32:26.890806156Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-22T18:32:26.891855903Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.052027ms grafana | logger=migrator t=2025-06-22T18:32:26.926852678Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-22T18:32:26.928353662Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.500744ms grafana | logger=migrator t=2025-06-22T18:32:26.932356093Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2025-06-22T18:32:26.932880371Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=523.459µs grafana | logger=migrator t=2025-06-22T18:32:26.935923678Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-22T18:32:26.936879563Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=956.144µs grafana | logger=migrator t=2025-06-22T18:32:26.942923886Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-22T18:32:26.942958007Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=35.402µs grafana | logger=migrator t=2025-06-22T18:32:26.948625277Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-22T18:32:26.952678729Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=4.045942ms grafana | logger=migrator t=2025-06-22T18:32:26.956334929Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-22T18:32:26.958275837Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.943188ms grafana | logger=migrator 
t=2025-06-22T18:32:26.965902426Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-22T18:32:26.968435116Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.53094ms grafana | logger=migrator t=2025-06-22T18:32:26.97254331Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-22T18:32:26.9733756Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=831.58µs grafana | logger=migrator t=2025-06-22T18:32:26.977172764Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-22T18:32:26.979804707Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.631243ms grafana | logger=migrator t=2025-06-22T18:32:26.985736036Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-22T18:32:26.986690369Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=953.723µs grafana | logger=migrator t=2025-06-22T18:32:26.989875442Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-22T18:32:26.990794325Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=918.553µs grafana | logger=migrator t=2025-06-22T18:32:26.994219165Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-22T18:32:26.994253906Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=31.831µs grafana | logger=migrator t=2025-06-22T18:32:27.000776436Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-22T18:32:27.000809927Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=31.891µs grafana | logger=migrator t=2025-06-22T18:32:27.004944654Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-22T18:32:27.008668842Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.723308ms grafana | logger=migrator t=2025-06-22T18:32:27.012579696Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-22T18:32:27.01530194Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.722914ms grafana | logger=migrator t=2025-06-22T18:32:27.020351757Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-22T18:32:27.022596138Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.243751ms grafana | logger=migrator t=2025-06-22T18:32:27.0260164Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-22T18:32:27.029337491Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.319791ms grafana | logger=migrator t=2025-06-22T18:32:27.032828773Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-22T18:32:27.033145636Z level=info msg="Migration successfully executed" id="Update uid column values 
in dashboard" duration=316.743µs grafana | logger=migrator t=2025-06-22T18:32:27.035946942Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-22T18:32:27.036736069Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=789.207µs grafana | logger=migrator t=2025-06-22T18:32:27.041713606Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-22T18:32:27.042421542Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=707.316µs grafana | logger=migrator t=2025-06-22T18:32:27.068717567Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-22T18:32:27.068765198Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=43.761µs grafana | logger=migrator t=2025-06-22T18:32:27.072583603Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-22T18:32:27.074094467Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.510064ms grafana | logger=migrator t=2025-06-22T18:32:27.080520097Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-22T18:32:27.081242033Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=720.886µs grafana | logger=migrator t=2025-06-22T18:32:27.084695265Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-22T18:32:27.090454569Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.755244ms grafana | logger=migrator t=2025-06-22T18:32:27.095256594Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-22T18:32:27.096147752Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=891.688µs grafana | logger=migrator t=2025-06-22T18:32:27.101419711Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-22T18:32:27.10230307Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=880.189µs grafana | logger=migrator t=2025-06-22T18:32:27.105823982Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-22T18:32:27.106890782Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.06599ms grafana | logger=migrator t=2025-06-22T18:32:27.110452455Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-22T18:32:27.110899629Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=446.684µs grafana | logger=migrator t=2025-06-22T18:32:27.118305198Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-22T18:32:27.119328438Z level=info msg="Migration successfully executed" id="drop 
dashboard_provisioning_tmp_qwerty" duration=1.02258ms grafana | logger=migrator t=2025-06-22T18:32:27.123126563Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-22T18:32:27.126892777Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.765334ms grafana | logger=migrator t=2025-06-22T18:32:27.131185168Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-22T18:32:27.132108646Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=923.138µs grafana | logger=migrator t=2025-06-22T18:32:27.138004721Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-22T18:32:27.138328444Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=323.373µs grafana | logger=migrator t=2025-06-22T18:32:27.142768225Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-22T18:32:27.143142619Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=373.794µs grafana | logger=migrator t=2025-06-22T18:32:27.150955722Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-22T18:32:27.151664068Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=708.796µs grafana | logger=migrator t=2025-06-22T18:32:27.154812067Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-22T18:32:27.157100688Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.285771ms grafana | logger=migrator t=2025-06-22T18:32:27.16362341Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2025-06-22T18:32:27.167197503Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=3.573133ms grafana | logger=migrator t=2025-06-22T18:32:27.171480542Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-22T18:32:27.172336871Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=855.989µs grafana | logger=migrator t=2025-06-22T18:32:27.175721302Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-22T18:32:27.178990862Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=3.26806ms grafana | logger=migrator t=2025-06-22T18:32:27.186168569Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-22T18:32:27.188751852Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.582633ms grafana | logger=migrator t=2025-06-22T18:32:27.215200209Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-22T18:32:27.216004906Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=804.607µs grafana | logger=migrator t=2025-06-22T18:32:27.219951493Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-22T18:32:27.223576667Z level=info 
msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=3.624614ms grafana | logger=migrator t=2025-06-22T18:32:27.227104609Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-22T18:32:27.227906847Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=801.998µs grafana | logger=migrator t=2025-06-22T18:32:27.233060655Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-22T18:32:27.233489778Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=427.863µs grafana | logger=migrator t=2025-06-22T18:32:27.237135683Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2025-06-22T18:32:27.238508735Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.372782ms grafana | logger=migrator t=2025-06-22T18:32:27.243123019Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-22T18:32:27.244008697Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=884.337µs grafana | logger=migrator t=2025-06-22T18:32:27.249641489Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-22T18:32:27.250863751Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.221952ms grafana | logger=migrator t=2025-06-22T18:32:27.254745436Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-22T18:32:27.255929637Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.184181ms grafana | logger=migrator t=2025-06-22T18:32:27.260045456Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-22T18:32:27.261243967Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.198701ms grafana | logger=migrator t=2025-06-22T18:32:27.26797776Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-22T18:32:27.275542149Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.564239ms grafana | logger=migrator t=2025-06-22T18:32:27.279670358Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-22T18:32:27.280501366Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=830.348µs grafana | logger=migrator t=2025-06-22T18:32:27.28633029Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-22T18:32:27.287743294Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.411563ms grafana | logger=migrator t=2025-06-22T18:32:27.292169334Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-22T18:32:27.293469937Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" 
grafana | logger=migrator t=2025-06-22T18:32:27.300010077Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2025-06-22T18:32:27.300407791Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=397.474µs
grafana | logger=migrator t=2025-06-22T18:32:27.308368205Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2025-06-22T18:32:27.312367093Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.997788ms
grafana | logger=migrator t=2025-06-22T18:32:27.316270359Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2025-06-22T18:32:27.319544469Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.27484ms
grafana | logger=migrator t=2025-06-22T18:32:27.322590997Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2025-06-22T18:32:27.322609247Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=18.34µs
grafana | logger=migrator t=2025-06-22T18:32:27.328781784Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2025-06-22T18:32:27.329113758Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=332.444µs
grafana | logger=migrator t=2025-06-22T18:32:27.332778332Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2025-06-22T18:32:27.336372285Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.594493ms
grafana | logger=migrator t=2025-06-22T18:32:27.360332019Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2025-06-22T18:32:27.360849353Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=518.065µs
grafana | logger=migrator t=2025-06-22T18:32:27.366188693Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2025-06-22T18:32:27.366473475Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=285.182µs
grafana | logger=migrator t=2025-06-22T18:32:27.373004656Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2025-06-22T18:32:27.376979853Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.974727ms
grafana | logger=migrator t=2025-06-22T18:32:27.380622667Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2025-06-22T18:32:27.380792638Z level=info msg="Migration successfully executed" id="Update uid value" duration=172.081µs
grafana | logger=migrator t=2025-06-22T18:32:27.384514313Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2025-06-22T18:32:27.385277371Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=762.488µs
grafana | logger=migrator t=2025-06-22T18:32:27.391506528Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2025-06-22T18:32:27.392297175Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=790.097µs
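The data_source steps "Add uid column", "Update uid value", and "Add unique index datasource_org_id_uid" show the usual three-phase introduction of a new key: add a nullable column, backfill every existing row, then enforce uniqueness. A sketch of the sequence; the uid generator below is an assumption for illustration, not Grafana's implementation:

```python
import sqlite3
import secrets

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data_source (id INTEGER PRIMARY KEY, org_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO data_source (org_id, name) VALUES (?, ?)",
                 [(1, "prometheus"), (1, "loki")])

conn.execute("ALTER TABLE data_source ADD COLUMN uid TEXT")  # Add uid column
# Update uid value: backfill rows that predate the column (illustrative generator).
for (row_id,) in conn.execute("SELECT id FROM data_source WHERE uid IS NULL").fetchall():
    conn.execute("UPDATE data_source SET uid = ? WHERE id = ?", (secrets.token_hex(7), row_id))
# Only after the backfill can the unique constraint be enforced safely.
conn.execute("CREATE UNIQUE INDEX UQE_data_source_org_id_uid ON data_source (org_id, uid)")
conn.commit()
```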
grafana | logger=migrator t=2025-06-22T18:32:27.395337823Z level=info msg="Executing migration" id="Add is_prunable column"
grafana | logger=migrator t=2025-06-22T18:32:27.397767146Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.428863ms
grafana | logger=migrator t=2025-06-22T18:32:27.40138337Z level=info msg="Executing migration" id="Add api_version column"
grafana | logger=migrator t=2025-06-22T18:32:27.405314767Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.934408ms
grafana | logger=migrator t=2025-06-22T18:32:27.41319443Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText"
grafana | logger=migrator t=2025-06-22T18:32:27.41322314Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=30µs
grafana | logger=migrator t=2025-06-22T18:32:27.417209717Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2025-06-22T18:32:27.418452898Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.242951ms
grafana | logger=migrator t=2025-06-22T18:32:27.421934861Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2025-06-22T18:32:27.422687738Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=752.727µs
grafana | logger=migrator t=2025-06-22T18:32:27.425915658Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2025-06-22T18:32:27.426650074Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=734.226µs
grafana | logger=migrator t=2025-06-22T18:32:27.433060605Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2025-06-22T18:32:27.434264006Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.20243ms
grafana | logger=migrator t=2025-06-22T18:32:27.438252613Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2025-06-22T18:32:27.439963778Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.710365ms
grafana | logger=migrator t=2025-06-22T18:32:27.443449631Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2025-06-22T18:32:27.444721273Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.270622ms
grafana | logger=migrator t=2025-06-22T18:32:27.451059462Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2025-06-22T18:32:27.451784428Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=724.376µs
grafana | logger=migrator t=2025-06-22T18:32:27.455542394Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2025-06-22T18:32:27.465883109Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.340755ms
grafana | logger=migrator t=2025-06-22T18:32:27.471703783Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2025-06-22T18:32:27.47232195Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=617.577µs
grafana | logger=migrator
t=2025-06-22T18:32:27.476164225Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-22T18:32:27.476940042Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=775.657µs grafana | logger=migrator t=2025-06-22T18:32:27.499104078Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-22T18:32:27.500719124Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.616696ms grafana | logger=migrator t=2025-06-22T18:32:27.508549057Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-22T18:32:27.509732247Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.17425ms grafana | logger=migrator t=2025-06-22T18:32:27.514119348Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-22T18:32:27.514424931Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=305.583µs grafana | logger=migrator t=2025-06-22T18:32:27.517967764Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-22T18:32:27.518779551Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=811.177µs grafana | logger=migrator t=2025-06-22T18:32:27.526515543Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-22T18:32:27.526552444Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=37.531µs grafana | logger=migrator t=2025-06-22T18:32:27.531144067Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-22T18:32:27.535008332Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.865205ms grafana | logger=migrator t=2025-06-22T18:32:27.538522275Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2025-06-22T18:32:27.540325172Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.802037ms grafana | logger=migrator t=2025-06-22T18:32:27.54549736Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-22T18:32:27.545676482Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=179.332µs grafana | logger=migrator t=2025-06-22T18:32:27.550024361Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-22T18:32:27.553965229Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.939648ms grafana | logger=migrator t=2025-06-22T18:32:27.557727863Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-22T18:32:27.562862051Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=5.133648ms grafana | logger=migrator t=2025-06-22T18:32:27.565903399Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-22T18:32:27.566403554Z level=info msg="Migration successfully executed" id="create dashboard_snapshot 
table v4" duration=499.805µs grafana | logger=migrator t=2025-06-22T18:32:27.574078606Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-22T18:32:27.575273506Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.19396ms grafana | logger=migrator t=2025-06-22T18:32:27.580353074Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-22T18:32:27.581571146Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.217601ms grafana | logger=migrator t=2025-06-22T18:32:27.587331629Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-22T18:32:27.588971804Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.639745ms grafana | logger=migrator t=2025-06-22T18:32:27.593462806Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-22T18:32:27.594673977Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.210971ms grafana | logger=migrator t=2025-06-22T18:32:27.598450502Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-22T18:32:27.599188639Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=737.747µs grafana | logger=migrator t=2025-06-22T18:32:27.602638901Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-22T18:32:27.602653331Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=14.86µs grafana | logger=migrator t=2025-06-22T18:32:27.609593026Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-22T18:32:27.609633346Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=39.11µs grafana | logger=migrator t=2025-06-22T18:32:27.613369141Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-22T18:32:27.619165245Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=5.791424ms grafana | logger=migrator t=2025-06-22T18:32:27.622400525Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-22T18:32:27.625273591Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.872676ms grafana | logger=migrator t=2025-06-22T18:32:27.642017827Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-22T18:32:27.642096118Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=31.39µs grafana | logger=migrator t=2025-06-22T18:32:27.646137956Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-22T18:32:27.647391757Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.254891ms grafana | logger=migrator 
t=2025-06-22T18:32:27.651095962Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
grafana | logger=migrator t=2025-06-22T18:32:27.652125911Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.030429ms
grafana | logger=migrator t=2025-06-22T18:32:27.655450402Z level=info msg="Executing migration" id="Update quota table charset"
grafana | logger=migrator t=2025-06-22T18:32:27.655474042Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=24.18µs
grafana | logger=migrator t=2025-06-22T18:32:27.661590839Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2025-06-22T18:32:27.662834371Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.243482ms
grafana | logger=migrator t=2025-06-22T18:32:27.666599966Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | logger=migrator t=2025-06-22T18:32:27.668458783Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.858207ms
grafana | logger=migrator t=2025-06-22T18:32:27.672193858Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2025-06-22T18:32:27.676850161Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.655443ms
grafana | logger=migrator t=2025-06-22T18:32:27.680924199Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2025-06-22T18:32:27.680942879Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=18.86µs
grafana | logger=migrator t=2025-06-22T18:32:27.68322915Z level=info msg="Executing migration" id="update NULL org_id to 1"
grafana | logger=migrator t=2025-06-22T18:32:27.683454262Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=234.412µs
grafana | logger=migrator t=2025-06-22T18:32:27.687495411Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1"
grafana | logger=migrator t=2025-06-22T18:32:27.69825601Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=10.758549ms
grafana | logger=migrator t=2025-06-22T18:32:27.704749721Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2025-06-22T18:32:27.705309036Z level=info msg="Migration successfully executed" id="create session table" duration=558.435µs
grafana | logger=migrator t=2025-06-22T18:32:27.708538426Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2025-06-22T18:32:27.708636097Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=97.341µs
grafana | logger=migrator t=2025-06-22T18:32:27.712065958Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2025-06-22T18:32:27.712175939Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=110.791µs
grafana | logger=migrator t=2025-06-22T18:32:27.717334157Z level=info msg="Executing migration" id="create playlist table v2"
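The plugin_setting pair "update NULL org_id to 1" and "make org_id NOT NULL and DEFAULT VALUE 1" above is the backfill-then-constrain pattern: data is repaired first so the stricter constraint cannot fail. SQLite cannot change a column's nullability in place, so the sketch below shows the constraint step as a table rebuild; the schema is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE plugin_setting (id INTEGER PRIMARY KEY, org_id INTEGER, plugin_id TEXT);
INSERT INTO plugin_setting (org_id, plugin_id) VALUES (NULL, 'piechart'), (2, 'worldmap');

UPDATE plugin_setting SET org_id = 1 WHERE org_id IS NULL;  -- update NULL org_id to 1

-- make org_id NOT NULL and DEFAULT VALUE 1 (rebuild, since SQLite cannot ALTER the column)
ALTER TABLE plugin_setting RENAME TO plugin_setting_old;
CREATE TABLE plugin_setting (id INTEGER PRIMARY KEY, org_id INTEGER NOT NULL DEFAULT 1, plugin_id TEXT);
INSERT INTO plugin_setting SELECT id, org_id, plugin_id FROM plugin_setting_old;
DROP TABLE plugin_setting_old;
""")
print(conn.execute("SELECT org_id FROM plugin_setting ORDER BY id").fetchall())  # [(1,), (2,)]
```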
id="create playlist table v2" duration=1.529975ms grafana | logger=migrator t=2025-06-22T18:32:27.724089241Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-22T18:32:27.724990479Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=901.718µs grafana | logger=migrator t=2025-06-22T18:32:27.728620432Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-22T18:32:27.728644842Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=24.45µs grafana | logger=migrator t=2025-06-22T18:32:27.731903283Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-22T18:32:27.731929083Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=26.02µs grafana | logger=migrator t=2025-06-22T18:32:27.736703818Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-22T18:32:27.741811825Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.108088ms grafana | logger=migrator t=2025-06-22T18:32:27.746239796Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-22T18:32:27.750277654Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.037138ms grafana | logger=migrator t=2025-06-22T18:32:27.753781936Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-22T18:32:27.753861857Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=79.451µs grafana | logger=migrator t=2025-06-22T18:32:27.787763832Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-22T18:32:27.787949694Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=185.992µs grafana | logger=migrator t=2025-06-22T18:32:27.793377254Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2025-06-22T18:32:27.794743307Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.365723ms grafana | logger=migrator t=2025-06-22T18:32:27.798520022Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-22T18:32:27.798544762Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=24.8µs grafana | logger=migrator t=2025-06-22T18:32:27.801829853Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-22T18:32:27.805073883Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.24359ms grafana | logger=migrator t=2025-06-22T18:32:27.810631714Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-22T18:32:27.810799527Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=168.373µs grafana | logger=migrator t=2025-06-22T18:32:27.815647381Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2025-06-22T18:32:27.822513836Z level=info msg="Migration successfully executed" 
id="Add column week_start in preferences" duration=6.865695ms grafana | logger=migrator t=2025-06-22T18:32:27.826143799Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-22T18:32:27.829283248Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.138999ms grafana | logger=migrator t=2025-06-22T18:32:27.833521588Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-22T18:32:27.833540098Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=18.17µs grafana | logger=migrator t=2025-06-22T18:32:27.837971969Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-22T18:32:27.839761876Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.788367ms grafana | logger=migrator t=2025-06-22T18:32:27.844911014Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-22T18:32:27.846564179Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.653075ms grafana | logger=migrator t=2025-06-22T18:32:27.852319852Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-22T18:32:27.854075159Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.754907ms grafana | logger=migrator t=2025-06-22T18:32:27.85747739Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-22T18:32:27.858854394Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.377464ms grafana | logger=migrator t=2025-06-22T18:32:27.861995343Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-22T18:32:27.862888221Z level=info msg="Migration successfully executed" id="add index alert state" duration=892.748µs grafana | logger=migrator t=2025-06-22T18:32:27.867665075Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-22T18:32:27.868604844Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=939.649µs grafana | logger=migrator t=2025-06-22T18:32:27.874000084Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-22T18:32:27.874831401Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=830.587µs grafana | logger=migrator t=2025-06-22T18:32:27.87888251Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-22T18:32:27.880394313Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.505493ms grafana | logger=migrator t=2025-06-22T18:32:27.884081428Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-22T18:32:27.884979936Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=897.898µs grafana | logger=migrator t=2025-06-22T18:32:27.889831152Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator 
t=2025-06-22T18:32:27.90151648Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=11.684038ms grafana | logger=migrator t=2025-06-22T18:32:27.904951412Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-22T18:32:27.905620978Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=668.946µs grafana | logger=migrator t=2025-06-22T18:32:27.935857729Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-22T18:32:27.937391794Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.539105ms grafana | logger=migrator t=2025-06-22T18:32:27.944703391Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-22T18:32:27.945098365Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=393.864µs grafana | logger=migrator t=2025-06-22T18:32:27.948370666Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-22T18:32:27.948985281Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=613.725µs grafana | logger=migrator t=2025-06-22T18:32:27.952124691Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-22T18:32:27.952997188Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=871.508µs grafana | logger=migrator t=2025-06-22T18:32:27.957565911Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-22T18:32:27.963157783Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.589732ms grafana | logger=migrator t=2025-06-22T18:32:27.966721896Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2025-06-22T18:32:27.970432621Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.708945ms grafana | logger=migrator t=2025-06-22T18:32:27.973769352Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-22T18:32:27.977475566Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.705354ms grafana | logger=migrator t=2025-06-22T18:32:27.982694335Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-22T18:32:27.986341588Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.647723ms grafana | logger=migrator t=2025-06-22T18:32:27.989558088Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-22T18:32:27.991083143Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.524495ms grafana | logger=migrator t=2025-06-22T18:32:27.997023048Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-22T18:32:27.997093129Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=74.581µs grafana | logger=migrator 
t=2025-06-22T18:32:28.000727182Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-22T18:32:28.000771422Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=45.67µs grafana | logger=migrator t=2025-06-22T18:32:28.00639366Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-22T18:32:28.007811855Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.417915ms grafana | logger=migrator t=2025-06-22T18:32:28.012447442Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-22T18:32:28.013973548Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.521835ms grafana | logger=migrator t=2025-06-22T18:32:28.018781612Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-22T18:32:28.01982982Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.049068ms grafana | logger=migrator t=2025-06-22T18:32:28.026035955Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-22T18:32:28.026768622Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=732.597µs grafana | logger=migrator t=2025-06-22T18:32:28.030612881Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-22T18:32:28.031512523Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=899.352µs grafana | logger=migrator t=2025-06-22T18:32:28.037682518Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-22T18:32:28.044247715Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=6.564747ms grafana | logger=migrator t=2025-06-22T18:32:28.047764612Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-22T18:32:28.051528698Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.766126ms grafana | logger=migrator t=2025-06-22T18:32:28.055091768Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-22T18:32:28.055287135Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=192.657µs grafana | logger=migrator t=2025-06-22T18:32:28.079549164Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-22T18:32:28.080971926Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.422232ms grafana | logger=migrator t=2025-06-22T18:32:28.087983019Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-22T18:32:28.089221845Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.238676ms grafana | logger=migrator t=2025-06-22T18:32:28.095122949Z level=info 
msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-22T18:32:28.100327867Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.205218ms grafana | logger=migrator t=2025-06-22T18:32:28.105788635Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-22T18:32:28.105806396Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=18.101µs grafana | logger=migrator t=2025-06-22T18:32:28.11086919Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-22T18:32:28.111746381Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=883.082µs grafana | logger=migrator t=2025-06-22T18:32:28.114832573Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-22T18:32:28.116213533Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.38034ms grafana | logger=migrator t=2025-06-22T18:32:28.119751561Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-22T18:32:28.119908087Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=156.556µs grafana | logger=migrator t=2025-06-22T18:32:28.127474851Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-22T18:32:28.128464547Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=989.936µs grafana | logger=migrator t=2025-06-22T18:32:28.133281282Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2025-06-22T18:32:28.13461773Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.335718ms grafana | logger=migrator t=2025-06-22T18:32:28.137722072Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2025-06-22T18:32:28.139327201Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.604259ms grafana | logger=migrator t=2025-06-22T18:32:28.144377864Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-22T18:32:28.145188183Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=810.089µs grafana | logger=migrator t=2025-06-22T18:32:28.148639268Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-22T18:32:28.14953096Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=891.162µs grafana | logger=migrator t=2025-06-22T18:32:28.153826506Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-22T18:32:28.155109742Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.285646ms grafana | logger=migrator t=2025-06-22T18:32:28.160943254Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-22T18:32:28.161017387Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=74.243µs grafana | logger=migrator 
t=2025-06-22T18:32:28.16362383Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.167941677Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.316757ms grafana | logger=migrator t=2025-06-22T18:32:28.171788376Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-22T18:32:28.17268584Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=894.183µs grafana | logger=migrator t=2025-06-22T18:32:28.178031903Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.182109521Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.077288ms grafana | logger=migrator t=2025-06-22T18:32:28.185844866Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-22T18:32:28.186572773Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=727.517µs grafana | logger=migrator t=2025-06-22T18:32:28.19035414Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-22T18:32:28.191347545Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=989.095µs grafana | logger=migrator t=2025-06-22T18:32:28.196773232Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-22T18:32:28.198146362Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.37353ms grafana | logger=migrator t=2025-06-22T18:32:28.234700357Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-22T18:32:28.250117345Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.417588ms grafana | logger=migrator t=2025-06-22T18:32:28.253613952Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-22T18:32:28.25437312Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=758.328µs grafana | logger=migrator t=2025-06-22T18:32:28.259373351Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-22T18:32:28.260392668Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.018757ms grafana | logger=migrator t=2025-06-22T18:32:28.265377398Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-22T18:32:28.265868676Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=487.008µs grafana | logger=migrator t=2025-06-22T18:32:28.269176236Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-22T18:32:28.26983834Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=660.633µs grafana | logger=migrator 
t=2025-06-22T18:32:28.278468942Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-22T18:32:28.27894738Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=476.958µs grafana | logger=migrator t=2025-06-22T18:32:28.282761438Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.288186905Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.425667ms grafana | logger=migrator t=2025-06-22T18:32:28.293207587Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.29743818Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.229913ms grafana | logger=migrator t=2025-06-22T18:32:28.301674724Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.302620738Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=948.104µs grafana | logger=migrator t=2025-06-22T18:32:28.308263222Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.30930681Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.046658ms grafana | logger=migrator t=2025-06-22T18:32:28.315447533Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-22T18:32:28.315809006Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=361.253µs grafana | logger=migrator t=2025-06-22T18:32:28.319923325Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-22T18:32:28.324454119Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.530384ms grafana | logger=migrator t=2025-06-22T18:32:28.327823491Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-22T18:32:28.328758275Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=934.844µs grafana | logger=migrator t=2025-06-22T18:32:28.332047055Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-22T18:32:28.332292553Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=245.238µs grafana | logger=migrator t=2025-06-22T18:32:28.337246512Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-22T18:32:28.337831334Z level=info msg="Migration successfully executed" id="Move region to single row" duration=584.512µs grafana | logger=migrator t=2025-06-22T18:32:28.341949393Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.343420037Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.470334ms grafana | logger=migrator t=2025-06-22T18:32:28.347351999Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | 
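
The "Add epoch_end column" / "Make epoch_end the same as epoch" pair above is the usual add-then-backfill idiom: additive DDL first, a data backfill second, the index on the new column last. A sketch under an invented minimal annotation schema (an assumption, not Grafana's real table):

// Hedged sketch of the add-column-then-backfill pattern suggested by the
// epoch_end migrations above, on an invented two-column schema.
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // assumed driver
)

func main() {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	must := func(q string) {
		if _, err := db.Exec(q); err != nil {
			log.Fatalf("%v\n%s", err, q)
		}
	}
	must(`CREATE TABLE annotation (id INTEGER PRIMARY KEY, epoch INTEGER)`)
	must(`INSERT INTO annotation (epoch) VALUES (1750000), (1750060)`)
	// 1) additive DDL: cheap, no table rebuild
	must(`ALTER TABLE annotation ADD COLUMN epoch_end INTEGER NOT NULL DEFAULT 0`)
	// 2) backfill so existing rows satisfy the new invariant (epoch_end >= epoch)
	must(`UPDATE annotation SET epoch_end = epoch WHERE epoch_end = 0`)
	// 3) only then index the new column
	must(`CREATE INDEX IDX_annotation_epoch_end ON annotation (epoch_end)`)
	log.Println("backfill migration applied")
}
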
logger=migrator t=2025-06-22T18:32:28.348354315Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.001716ms grafana | logger=migrator t=2025-06-22T18:32:28.353236862Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.354237099Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=999.587µs grafana | logger=migrator t=2025-06-22T18:32:28.388238881Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.389392752Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.157241ms grafana | logger=migrator t=2025-06-22T18:32:28.393276643Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.394162565Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=885.652µs grafana | logger=migrator t=2025-06-22T18:32:28.397839388Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-22T18:32:28.39873561Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=896.452µs grafana | logger=migrator t=2025-06-22T18:32:28.403729802Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-22T18:32:28.403756443Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=27.591µs grafana | logger=migrator t=2025-06-22T18:32:28.407627794Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-22T18:32:28.407654364Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=27.281µs grafana | logger=migrator t=2025-06-22T18:32:28.412668095Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-22T18:32:28.412705237Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=38.392µs grafana | logger=migrator t=2025-06-22T18:32:28.420167107Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-22T18:32:28.421241846Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.077629ms grafana | logger=migrator t=2025-06-22T18:32:28.425025394Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-22T18:32:28.425824092Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=798.628µs grafana | logger=migrator t=2025-06-22T18:32:28.430732351Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-22T18:32:28.431633793Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=900.333µs grafana | logger=migrator t=2025-06-22T18:32:28.434929102Z level=info msg="Executing migration" id="add unique 
index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-22T18:32:28.436020552Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.0898ms grafana | logger=migrator t=2025-06-22T18:32:28.439861341Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-22T18:32:28.440162072Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=300.641µs grafana | logger=migrator t=2025-06-22T18:32:28.446250462Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-22T18:32:28.446607465Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=357.223µs grafana | logger=migrator t=2025-06-22T18:32:28.451737511Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-22T18:32:28.451778883Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=40.042µs grafana | logger=migrator t=2025-06-22T18:32:28.45638506Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-22T18:32:28.462217781Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=5.833091ms grafana | logger=migrator t=2025-06-22T18:32:28.465916555Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-22T18:32:28.466766066Z level=info msg="Migration successfully executed" id="create team table" duration=848.601µs grafana | logger=migrator t=2025-06-22T18:32:28.473582783Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-22T18:32:28.474646242Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.062179ms grafana | logger=migrator t=2025-06-22T18:32:28.478927347Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-22T18:32:28.480556886Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.625599ms grafana | logger=migrator t=2025-06-22T18:32:28.486411168Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-22T18:32:28.49308916Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.678992ms grafana | logger=migrator t=2025-06-22T18:32:28.497922895Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-22T18:32:28.498519877Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=596.532µs grafana | logger=migrator t=2025-06-22T18:32:28.50221354Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-22T18:32:28.503060122Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=846.552µs grafana | logger=migrator t=2025-06-22T18:32:28.50633333Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-22T18:32:28.51156132Z level=info msg="Migration successfully executed" id="Add column external_uid 
in team" duration=5.22601ms grafana | logger=migrator t=2025-06-22T18:32:28.534806392Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-22T18:32:28.544251054Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=9.446621ms grafana | logger=migrator t=2025-06-22T18:32:28.547819134Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-22T18:32:28.548629483Z level=info msg="Migration successfully executed" id="create team member table" duration=809.349µs grafana | logger=migrator t=2025-06-22T18:32:28.551837489Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-22T18:32:28.552831745Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=993.565µs grafana | logger=migrator t=2025-06-22T18:32:28.55820967Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-22T18:32:28.55930867Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.098121ms grafana | logger=migrator t=2025-06-22T18:32:28.565087879Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-22T18:32:28.566277562Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.192273ms grafana | logger=migrator t=2025-06-22T18:32:28.571280594Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-22T18:32:28.576162691Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.881708ms grafana | logger=migrator t=2025-06-22T18:32:28.580368823Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-22T18:32:28.58499204Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.622477ms grafana | logger=migrator t=2025-06-22T18:32:28.590558502Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-22T18:32:28.595286874Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.727462ms grafana | logger=migrator t=2025-06-22T18:32:28.598643875Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-22T18:32:28.599516367Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=872.352µs grafana | logger=migrator t=2025-06-22T18:32:28.604363193Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-22T18:32:28.60567659Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.312208ms grafana | logger=migrator t=2025-06-22T18:32:28.609129886Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-22T18:32:28.610495635Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.368389ms grafana | logger=migrator t=2025-06-22T18:32:28.61450806Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | 
logger=migrator t=2025-06-22T18:32:28.616120078Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.611299ms grafana | logger=migrator t=2025-06-22T18:32:28.622859642Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-22T18:32:28.623757425Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=897.063µs grafana | logger=migrator t=2025-06-22T18:32:28.628553658Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-22T18:32:28.629471902Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=917.304µs grafana | logger=migrator t=2025-06-22T18:32:28.632625317Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-22T18:32:28.634064938Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.438972ms grafana | logger=migrator t=2025-06-22T18:32:28.641513058Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-22T18:32:28.642464553Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=951.175µs grafana | logger=migrator t=2025-06-22T18:32:28.646167267Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-22T18:32:28.647671631Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.503244ms grafana | logger=migrator t=2025-06-22T18:32:28.651441419Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-22T18:32:28.652146244Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=704.435µs grafana | logger=migrator t=2025-06-22T18:32:28.686243219Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2025-06-22T18:32:28.686629233Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=387.134µs grafana | logger=migrator t=2025-06-22T18:32:28.691323134Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-22T18:32:28.692470335Z level=info msg="Migration successfully executed" id="create tag table" duration=1.150942ms grafana | logger=migrator t=2025-06-22T18:32:28.69813664Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-22T18:32:28.699864893Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.727723ms grafana | logger=migrator t=2025-06-22T18:32:28.703630589Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-22T18:32:28.704763801Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.132932ms grafana | logger=migrator t=2025-06-22T18:32:28.708384042Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-22T18:32:28.709374507Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=990.375µs grafana | logger=migrator 
t=2025-06-22T18:32:28.715485339Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-22T18:32:28.717051176Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.566497ms grafana | logger=migrator t=2025-06-22T18:32:28.721222538Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-22T18:32:28.735220575Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=13.992908ms grafana | logger=migrator t=2025-06-22T18:32:28.738569666Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-22T18:32:28.739134536Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=564.7µs grafana | logger=migrator t=2025-06-22T18:32:28.743883379Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-22T18:32:28.74475239Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=868.631µs grafana | logger=migrator t=2025-06-22T18:32:28.748289588Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-22T18:32:28.748747184Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=456.446µs grafana | logger=migrator t=2025-06-22T18:32:28.752092696Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-22T18:32:28.752993658Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=900.482µs grafana | logger=migrator t=2025-06-22T18:32:28.758891212Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-22T18:32:28.760065454Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.174282ms grafana | logger=migrator t=2025-06-22T18:32:28.764964482Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-22T18:32:28.766499598Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.535316ms grafana | logger=migrator t=2025-06-22T18:32:28.770098308Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-22T18:32:28.770126679Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=29.401µs grafana | logger=migrator t=2025-06-22T18:32:28.775650209Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-22T18:32:28.784183059Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.53468ms grafana | logger=migrator t=2025-06-22T18:32:28.788390721Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-22T18:32:28.792460619Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.068898ms grafana | logger=migrator t=2025-06-22T18:32:28.795942995Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator 
t=2025-06-22T18:32:28.801164694Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.22103ms grafana | logger=migrator t=2025-06-22T18:32:28.804292857Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-22T18:32:28.809534147Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.24063ms grafana | logger=migrator t=2025-06-22T18:32:28.828877508Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-22T18:32:28.830264958Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.3872ms grafana | logger=migrator t=2025-06-22T18:32:28.833934942Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-22T18:32:28.840184388Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.253397ms grafana | logger=migrator t=2025-06-22T18:32:28.843442436Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-22T18:32:28.850962429Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=7.520213ms grafana | logger=migrator t=2025-06-22T18:32:28.857272597Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-22T18:32:28.857984663Z level=info msg="Migration successfully executed" id="create server_lock table" duration=711.916µs grafana | logger=migrator t=2025-06-22T18:32:28.860995152Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-22T18:32:28.861943467Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=947.935µs grafana | logger=migrator t=2025-06-22T18:32:28.865091031Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-22T18:32:28.865963532Z level=info msg="Migration successfully executed" id="create user auth token table" duration=872.081µs grafana | logger=migrator t=2025-06-22T18:32:28.870408433Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-22T18:32:28.871346577Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=939.994µs grafana | logger=migrator t=2025-06-22T18:32:28.875683805Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-22T18:32:28.876642269Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=958.464µs grafana | logger=migrator t=2025-06-22T18:32:28.881675132Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-22T18:32:28.883136704Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.461182ms grafana | logger=migrator t=2025-06-22T18:32:28.886230896Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-22T18:32:28.892867837Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.637521ms grafana | logger=migrator t=2025-06-22T18:32:28.895999441Z 
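
The server_lock table with its unique operation_uid index, created above, has the shape of a plain database advisory lock: the first writer to insert the row owns the lock, and the unique index turns every later insert into a constraint failure. A sketch under that reading, with invented columns:

// Hedged sketch: a unique index as a test-and-set lock. Column names are
// assumptions for illustration, not Grafana's actual server_lock schema.
package main

import (
	"database/sql"
	"log"
	"time"

	_ "modernc.org/sqlite" // assumed driver
)

func tryLock(db *sql.DB, op string) bool {
	// the unique index on operation_uid makes this INSERT atomic test-and-set
	_, err := db.Exec(
		`INSERT INTO server_lock (operation_uid, last_execution) VALUES (?, ?)`,
		op, time.Now().Unix())
	return err == nil
}

func main() {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// setup (errors elided in this sketch)
	db.Exec(`CREATE TABLE server_lock (operation_uid TEXT, last_execution INTEGER)`)
	db.Exec(`CREATE UNIQUE INDEX UQE_server_lock_operation_uid ON server_lock (operation_uid)`)

	log.Println("first acquire:", tryLock(db, "cleanup"))  // true
	log.Println("second acquire:", tryLock(db, "cleanup")) // false: unique index rejects
}
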
level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-22T18:32:28.896922894Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=923.153µs grafana | logger=migrator t=2025-06-22T18:32:28.901085805Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-22T18:32:28.906763741Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=5.677246ms grafana | logger=migrator t=2025-06-22T18:32:28.912051563Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-22T18:32:28.912865702Z level=info msg="Migration successfully executed" id="create cache_data table" duration=813.729µs grafana | logger=migrator t=2025-06-22T18:32:28.915882691Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-22T18:32:28.916815445Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=932.433µs grafana | logger=migrator t=2025-06-22T18:32:28.920225899Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-22T18:32:28.921039378Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=814.449µs grafana | logger=migrator t=2025-06-22T18:32:28.927007834Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-22T18:32:28.927969669Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=961.535µs grafana | logger=migrator t=2025-06-22T18:32:28.931264148Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-22T18:32:28.93128775Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=46.203µs grafana | logger=migrator t=2025-06-22T18:32:28.936442786Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-22T18:32:28.936601031Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=157.995µs grafana | logger=migrator t=2025-06-22T18:32:28.941933575Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-22T18:32:28.943355517Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.420932ms grafana | logger=migrator t=2025-06-22T18:32:28.946898875Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-22T18:32:28.948463552Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.564017ms grafana | logger=migrator t=2025-06-22T18:32:28.97463776Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-22T18:32:28.976414925Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.777275ms grafana | logger=migrator t=2025-06-22T18:32:28.981580441Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | 
logger=migrator t=2025-06-22T18:32:28.981600222Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=18.801µs grafana | logger=migrator t=2025-06-22T18:32:28.985074729Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-22T18:32:28.986079395Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.004006ms grafana | logger=migrator t=2025-06-22T18:32:28.990080019Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-22T18:32:28.991599585Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.519236ms grafana | logger=migrator t=2025-06-22T18:32:29.002145117Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-22T18:32:29.004081727Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.9391ms grafana | logger=migrator t=2025-06-22T18:32:29.008687384Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-22T18:32:29.009713922Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.025998ms grafana | logger=migrator t=2025-06-22T18:32:29.01355016Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-22T18:32:29.019533437Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.983097ms grafana | logger=migrator t=2025-06-22T18:32:29.025246874Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-22T18:32:29.026206099Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=958.684µs grafana | logger=migrator t=2025-06-22T18:32:29.030802495Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-22T18:32:29.030969071Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=163.236µs grafana | logger=migrator t=2025-06-22T18:32:29.036328266Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-22T18:32:29.037837511Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.508925ms grafana | logger=migrator t=2025-06-22T18:32:29.042005341Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-22T18:32:29.043868789Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.863188ms grafana | logger=migrator t=2025-06-22T18:32:29.049515424Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-22T18:32:29.050545821Z level=info msg="Migration successfully executed" id="add index in 
alert_definition_version table on alert_definition_uid and version columns" duration=1.029937ms grafana | logger=migrator t=2025-06-22T18:32:29.055039714Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-22T18:32:29.055075315Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=37.891µs grafana | logger=migrator t=2025-06-22T18:32:29.058583882Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-22T18:32:29.060371327Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.786505ms grafana | logger=migrator t=2025-06-22T18:32:29.065864977Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-22T18:32:29.067740614Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.875488ms grafana | logger=migrator t=2025-06-22T18:32:29.071281962Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-22T18:32:29.072460105Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.177443ms grafana | logger=migrator t=2025-06-22T18:32:29.076031654Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-22T18:32:29.077193447Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.161003ms grafana | logger=migrator t=2025-06-22T18:32:29.08418483Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-22T18:32:29.092882185Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.698305ms grafana | logger=migrator t=2025-06-22T18:32:29.119900794Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-22T18:32:29.121405938Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.499934ms grafana | logger=migrator t=2025-06-22T18:32:29.125255879Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-22T18:32:29.126765333Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.512135ms grafana | logger=migrator t=2025-06-22T18:32:29.132068365Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-22T18:32:29.157112343Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.041587ms grafana | logger=migrator t=2025-06-22T18:32:29.160382241Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-22T18:32:29.186335532Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=25.952241ms grafana 
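
Worth noting in the entries above: the two column renames in alert_instance take roughly 25 ms each, while ordinary index operations finish in about a millisecond, consistent with a rename costing more than additive DDL. The duration= field itself is just wall-clock timing around each statement; a minimal reproduction of that logging style, with placeholder statements rather than Grafana's real migrations:

// Hedged sketch: timing each statement and emitting migrator-style lines.
package main

import (
	"database/sql"
	"log"
	"time"

	_ "modernc.org/sqlite" // assumed driver
)

func execLogged(db *sql.DB, id, q string) {
	log.Printf(`level=info msg="Executing migration" id=%q`, id)
	start := time.Now()
	if _, err := db.Exec(q); err != nil {
		log.Fatalf("migration %q failed: %v", id, err)
	}
	log.Printf(`level=info msg="Migration successfully executed" id=%q duration=%s`,
		id, time.Since(start))
}

func main() {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	execLogged(db, "create alert_instance table",
		`CREATE TABLE alert_instance (def_org_id INTEGER, def_uid TEXT, current_state TEXT)`)
	execLogged(db, "rename def_org_id to rule_org_id in alert_instance",
		`ALTER TABLE alert_instance RENAME COLUMN def_org_id TO rule_org_id`)
}
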
| logger=migrator t=2025-06-22T18:32:29.191443136Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-22T18:32:29.192398642Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=954.996µs grafana | logger=migrator t=2025-06-22T18:32:29.196635415Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-22T18:32:29.19760317Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=967.115µs grafana | logger=migrator t=2025-06-22T18:32:29.20147888Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-22T18:32:29.208854428Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.395208ms grafana | logger=migrator t=2025-06-22T18:32:29.214052266Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-22T18:32:29.223274451Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=9.222895ms grafana | logger=migrator t=2025-06-22T18:32:29.226816949Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-22T18:32:29.227832425Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.012796ms grafana | logger=migrator t=2025-06-22T18:32:29.230936917Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-22T18:32:29.231946745Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.009498ms grafana | logger=migrator t=2025-06-22T18:32:29.236855893Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2025-06-22T18:32:29.237958062Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.100889ms grafana | logger=migrator t=2025-06-22T18:32:29.265610255Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-22T18:32:29.267190501Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.579417ms grafana | logger=migrator t=2025-06-22T18:32:29.272173092Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-22T18:32:29.272201513Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=29.661µs grafana | logger=migrator t=2025-06-22T18:32:29.277565097Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-22T18:32:29.283834854Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.269317ms grafana | logger=migrator t=2025-06-22T18:32:29.287610022Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-22T18:32:29.293872089Z level=info msg="Migration 
successfully executed" id="add column annotations to alert_rule" duration=6.261417ms grafana | logger=migrator t=2025-06-22T18:32:29.297162188Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-22T18:32:29.303433385Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.270927ms grafana | logger=migrator t=2025-06-22T18:32:29.306699024Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-22T18:32:29.307629907Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=926.933µs grafana | logger=migrator t=2025-06-22T18:32:29.3123825Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-22T18:32:29.313420917Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.037827ms grafana | logger=migrator t=2025-06-22T18:32:29.316716336Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-22T18:32:29.323527254Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.810198ms grafana | logger=migrator t=2025-06-22T18:32:29.329156818Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-22T18:32:29.335379343Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.221615ms grafana | logger=migrator t=2025-06-22T18:32:29.339842764Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-22T18:32:29.340933874Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.09085ms grafana | logger=migrator t=2025-06-22T18:32:29.345062643Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-22T18:32:29.351118743Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.051699ms grafana | logger=migrator t=2025-06-22T18:32:29.356035511Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-22T18:32:29.362454833Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.420882ms grafana | logger=migrator t=2025-06-22T18:32:29.365999623Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-22T18:32:29.366017553Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=18.62µs grafana | logger=migrator t=2025-06-22T18:32:29.36924873Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-22T18:32:29.370340449Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.090999ms grafana | logger=migrator t=2025-06-22T18:32:29.375827208Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator 
t=2025-06-22T18:32:29.37753105Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.703322ms grafana | logger=migrator t=2025-06-22T18:32:29.382139437Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-22T18:32:29.383781497Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.64319ms grafana | logger=migrator t=2025-06-22T18:32:29.388427315Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-22T18:32:29.388444996Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=18.481µs grafana | logger=migrator t=2025-06-22T18:32:29.391526297Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-22T18:32:29.402017778Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=10.489301ms grafana | logger=migrator t=2025-06-22T18:32:29.418531895Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-22T18:32:29.428634852Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=10.103527ms grafana | logger=migrator t=2025-06-22T18:32:29.431983023Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-22T18:32:29.436556629Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.573156ms grafana | logger=migrator t=2025-06-22T18:32:29.440423069Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-22T18:32:29.446827822Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.403643ms grafana | logger=migrator t=2025-06-22T18:32:29.451959347Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-22T18:32:29.458271656Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.311119ms grafana | logger=migrator t=2025-06-22T18:32:29.461475672Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-22T18:32:29.461493723Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=18.431µs grafana | logger=migrator t=2025-06-22T18:32:29.463860869Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-22T18:32:29.464700559Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=838.95µs grafana | logger=migrator t=2025-06-22T18:32:29.473367293Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-22T18:32:29.484382562Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=11.014369ms grafana | 
logger=migrator t=2025-06-22T18:32:29.488215901Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-22T18:32:29.488232061Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=16.66µs grafana | logger=migrator t=2025-06-22T18:32:29.490073428Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-22T18:32:29.494598363Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.524365ms grafana | logger=migrator t=2025-06-22T18:32:29.496835523Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-22T18:32:29.497529479Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=693.296µs grafana | logger=migrator t=2025-06-22T18:32:29.50114394Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-22T18:32:29.505651083Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.506923ms grafana | logger=migrator t=2025-06-22T18:32:29.509092688Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-22T18:32:29.509671579Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=578.481µs grafana | logger=migrator t=2025-06-22T18:32:29.512508051Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-22T18:32:29.513245468Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=736.727µs grafana | logger=migrator t=2025-06-22T18:32:29.519064509Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2025-06-22T18:32:29.523754049Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.68915ms grafana | logger=migrator t=2025-06-22T18:32:29.526685435Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-22T18:32:29.527265686Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=579.911µs grafana | logger=migrator t=2025-06-22T18:32:29.530213133Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-22T18:32:29.53095749Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=744.257µs grafana | logger=migrator t=2025-06-22T18:32:29.535191013Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-22T18:32:29.536515132Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.323299ms grafana | logger=migrator t=2025-06-22T18:32:29.540135453Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-22T18:32:29.541642407Z level=info msg="Migration 
successfully executed" id="add unique index on token to alert_image table" duration=1.506564ms grafana | logger=migrator t=2025-06-22T18:32:29.574042811Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-22T18:32:29.574099863Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=62.192µs grafana | logger=migrator t=2025-06-22T18:32:29.57841188Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-22T18:32:29.580194974Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.781924ms grafana | logger=migrator t=2025-06-22T18:32:29.584591184Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-22T18:32:29.585395493Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=804.249µs grafana | logger=migrator t=2025-06-22T18:32:29.587636624Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-22T18:32:29.587966766Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-22T18:32:29.592544912Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-22T18:32:29.593350311Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=822.24µs grafana | logger=migrator t=2025-06-22T18:32:29.597334485Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-22T18:32:29.599124Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.789575ms grafana | logger=migrator t=2025-06-22T18:32:29.602260954Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-22T18:32:29.609461775Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.199951ms grafana | logger=migrator t=2025-06-22T18:32:29.613681088Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-22T18:32:29.614482336Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=803.189µs grafana | logger=migrator t=2025-06-22T18:32:29.617008318Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-22T18:32:29.617937202Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=923.014µs grafana | logger=migrator t=2025-06-22T18:32:29.620802386Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-22T18:32:29.62174827Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=945.444µs grafana | logger=migrator t=2025-06-22T18:32:29.627618623Z level=info msg="Executing migration" id="add index library_element_connection 
element_id-kind-connection_id" grafana | logger=migrator t=2025-06-22T18:32:29.628724612Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.105479ms grafana | logger=migrator t=2025-06-22T18:32:29.63165801Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-22T18:32:29.632725228Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.067198ms grafana | logger=migrator t=2025-06-22T18:32:29.635630513Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-22T18:32:29.635661064Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=30.691µs grafana | logger=migrator t=2025-06-22T18:32:29.640367935Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-22T18:32:29.640383426Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=15.951µs grafana | logger=migrator t=2025-06-22T18:32:29.647712561Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-22T18:32:29.656120435Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=8.408094ms grafana | logger=migrator t=2025-06-22T18:32:29.658835314Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-22T18:32:29.659155395Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=319.361µs grafana | logger=migrator t=2025-06-22T18:32:29.661610744Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-22T18:32:29.662468216Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=856.232µs grafana | logger=migrator t=2025-06-22T18:32:29.667176886Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-22T18:32:29.66756334Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=385.764µs grafana | logger=migrator t=2025-06-22T18:32:29.671323307Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-22T18:32:29.672383475Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.059768ms grafana | logger=migrator t=2025-06-22T18:32:29.676419631Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-22T18:32:29.677318894Z level=info msg="Migration successfully executed" id="create secrets table" duration=898.823µs grafana | logger=migrator t=2025-06-22T18:32:29.680996287Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-22T18:32:29.712246369Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=31.241292ms grafana | logger=migrator t=2025-06-22T18:32:29.717892384Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-22T18:32:29.725579693Z level=info msg="Migration successfully executed" 
id="add name column into data_keys" duration=7.686718ms grafana | logger=migrator t=2025-06-22T18:32:29.728886002Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-22T18:32:29.72909658Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=209.538µs grafana | logger=migrator t=2025-06-22T18:32:29.731421924Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-22T18:32:29.762449878Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=31.027754ms grafana | logger=migrator t=2025-06-22T18:32:29.76801731Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-22T18:32:29.799854925Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.837625ms grafana | logger=migrator t=2025-06-22T18:32:29.803557608Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-22T18:32:29.804518043Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=964.685µs grafana | logger=migrator t=2025-06-22T18:32:29.807849683Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-22T18:32:29.808860471Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.011108ms grafana | logger=migrator t=2025-06-22T18:32:29.814819447Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-22T18:32:29.815035264Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=215.487µs grafana | logger=migrator t=2025-06-22T18:32:29.818301542Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2025-06-22T18:32:29.819179814Z level=info msg="Migration successfully executed" id="create permission table" duration=880.562µs grafana | logger=migrator t=2025-06-22T18:32:29.822524386Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-22T18:32:29.823620735Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.095869ms grafana | logger=migrator t=2025-06-22T18:32:29.829020791Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-22T18:32:29.830723813Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.702832ms grafana | logger=migrator t=2025-06-22T18:32:29.834352524Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-22T18:32:29.83534918Z level=info msg="Migration successfully executed" id="create role table" duration=996.296µs grafana | logger=migrator t=2025-06-22T18:32:29.840613601Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-22T18:32:29.852935488Z level=info msg="Migration successfully executed" id="add column display_name" duration=12.321957ms grafana | logger=migrator t=2025-06-22T18:32:29.874656845Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator 
t=2025-06-22T18:32:29.885109224Z level=info msg="Migration successfully executed" id="add column group_name" duration=10.45232ms grafana | logger=migrator t=2025-06-22T18:32:29.888739445Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-22T18:32:29.889868636Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.128751ms grafana | logger=migrator t=2025-06-22T18:32:29.893174756Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-22T18:32:29.894312007Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.136641ms grafana | logger=migrator t=2025-06-22T18:32:29.900284563Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-22T18:32:29.901444496Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.157473ms grafana | logger=migrator t=2025-06-22T18:32:29.90515363Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-22T18:32:29.906819281Z level=info msg="Migration successfully executed" id="create team role table" duration=1.66357ms grafana | logger=migrator t=2025-06-22T18:32:29.912421154Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-22T18:32:29.914159986Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.740323ms grafana | logger=migrator t=2025-06-22T18:32:29.919185948Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-22T18:32:29.920047569Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=861.681µs grafana | logger=migrator t=2025-06-22T18:32:29.923693232Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-22T18:32:29.924859294Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.165882ms grafana | logger=migrator t=2025-06-22T18:32:29.928476905Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-22T18:32:29.929477752Z level=info msg="Migration successfully executed" id="create user role table" duration=999.746µs grafana | logger=migrator t=2025-06-22T18:32:29.934200283Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-22T18:32:29.935351224Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.150441ms grafana | logger=migrator t=2025-06-22T18:32:29.938797449Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-22T18:32:29.939956792Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.159382ms grafana | logger=migrator t=2025-06-22T18:32:29.942978801Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-22T18:32:29.944141813Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.163202ms grafana | logger=migrator t=2025-06-22T18:32:29.951308383Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator 
t=2025-06-22T18:32:29.953106147Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.796564ms grafana | logger=migrator t=2025-06-22T18:32:29.956948137Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-22T18:32:29.958855506Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.908489ms grafana | logger=migrator t=2025-06-22T18:32:29.961823233Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-22T18:32:29.96282158Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=997.667µs grafana | logger=migrator t=2025-06-22T18:32:29.965927942Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-22T18:32:29.97220824Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.280738ms grafana | logger=migrator t=2025-06-22T18:32:29.97716222Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-22T18:32:29.97827764Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.11523ms grafana | logger=migrator t=2025-06-22T18:32:29.981310169Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-22T18:32:29.982474262Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.163623ms grafana | logger=migrator t=2025-06-22T18:32:29.985510842Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-22T18:32:29.986614622Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.10356ms grafana | logger=migrator t=2025-06-22T18:32:29.99124587Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-22T18:32:29.992350489Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.104369ms grafana | logger=migrator t=2025-06-22T18:32:30.01277923Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-22T18:32:30.014312486Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.531095ms grafana | logger=migrator t=2025-06-22T18:32:30.018936633Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-22T18:32:30.021185125Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=2.247432ms grafana | logger=migrator t=2025-06-22T18:32:30.026567899Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-22T18:32:30.035654289Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.08617ms grafana | logger=migrator t=2025-06-22T18:32:30.039131275Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-22T18:32:30.047678015Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.54635ms grafana | logger=migrator t=2025-06-22T18:32:30.05031225Z level=info msg="Executing migration" id="permission attribute 
migration" grafana | logger=migrator t=2025-06-22T18:32:30.056416151Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.102501ms grafana | logger=migrator t=2025-06-22T18:32:30.061713723Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-22T18:32:30.069783306Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.068353ms grafana | logger=migrator t=2025-06-22T18:32:30.072849307Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-22T18:32:30.073659996Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=810.079µs grafana | logger=migrator t=2025-06-22T18:32:30.076454177Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-22T18:32:30.077268297Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=816.24µs grafana | logger=migrator t=2025-06-22T18:32:30.083336957Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-22T18:32:30.084419036Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.081279ms grafana | logger=migrator t=2025-06-22T18:32:30.087313201Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-22T18:32:30.095832149Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.518148ms grafana | logger=migrator t=2025-06-22T18:32:30.100604803Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-22T18:32:30.101489624Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=884.721µs grafana | logger=migrator t=2025-06-22T18:32:30.104067698Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-22T18:32:30.105140507Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.075179ms grafana | logger=migrator t=2025-06-22T18:32:30.108220659Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-22T18:32:30.109169263Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=948.064µs grafana | logger=migrator t=2025-06-22T18:32:30.114219876Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-22T18:32:30.115531863Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.311967ms grafana | logger=migrator t=2025-06-22T18:32:30.119651363Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-22T18:32:30.119676844Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=26.801µs grafana | logger=migrator t=2025-06-22T18:32:30.123060506Z level=info msg="Executing migration" id="create 
query_history_details table v1" grafana | logger=migrator t=2025-06-22T18:32:30.124086783Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.024907ms grafana | logger=migrator t=2025-06-22T18:32:30.129459908Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-22T18:32:30.12950695Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=47.292µs grafana | logger=migrator t=2025-06-22T18:32:30.131823684Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-22T18:32:30.132340592Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=516.298µs grafana | logger=migrator t=2025-06-22T18:32:30.13530734Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-22T18:32:30.136011686Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=704.376µs grafana | logger=migrator t=2025-06-22T18:32:30.160601806Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-22T18:32:30.16178759Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.185494ms grafana | logger=migrator t=2025-06-22T18:32:30.166865164Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-22T18:32:30.167349902Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=483.587µs grafana | logger=migrator t=2025-06-22T18:32:30.170472485Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-22T18:32:30.171036845Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=564.34µs grafana | logger=migrator t=2025-06-22T18:32:30.174036444Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-22T18:32:30.174990908Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=953.635µs grafana | logger=migrator t=2025-06-22T18:32:30.177950266Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-22T18:32:30.179117938Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.161672ms grafana | logger=migrator t=2025-06-22T18:32:30.184457121Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-22T18:32:30.192868806Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.411055ms grafana | logger=migrator t=2025-06-22T18:32:30.19574256Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-22T18:32:30.195760021Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=17.281µs grafana | logger=migrator t=2025-06-22T18:32:30.19876896Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-22T18:32:30.199584639Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=816.519µs grafana | 
logger=migrator t=2025-06-22T18:32:30.204597061Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-22T18:32:30.205646179Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.048168ms grafana | logger=migrator t=2025-06-22T18:32:30.209088904Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-22T18:32:30.210274287Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.184923ms grafana | logger=migrator t=2025-06-22T18:32:30.21368482Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-22T18:32:30.22224472Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.55937ms grafana | logger=migrator t=2025-06-22T18:32:30.226860668Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-22T18:32:30.227970558Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.10911ms grafana | logger=migrator t=2025-06-22T18:32:30.232301515Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-22T18:32:30.233441036Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.138901ms grafana | logger=migrator t=2025-06-22T18:32:30.237005706Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-22T18:32:30.260945333Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.928346ms grafana | logger=migrator t=2025-06-22T18:32:30.265571211Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-22T18:32:30.266730862Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.158481ms grafana | logger=migrator t=2025-06-22T18:32:30.270198318Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-22T18:32:30.271437113Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.238365ms grafana | logger=migrator t=2025-06-22T18:32:30.274714162Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-22T18:32:30.275929376Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.214963ms grafana | logger=migrator t=2025-06-22T18:32:30.297664493Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-22T18:32:30.299958997Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.294304ms grafana | logger=migrator t=2025-06-22T18:32:30.304575934Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-22T18:32:30.305226018Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=659.154µs grafana | logger=migrator t=2025-06-22T18:32:30.309337366Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-22T18:32:30.310637434Z level=info 
msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.299058ms grafana | logger=migrator t=2025-06-22T18:32:30.31548669Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-22T18:32:30.327137962Z level=info msg="Migration successfully executed" id="add provisioning column" duration=11.652062ms grafana | logger=migrator t=2025-06-22T18:32:30.330595587Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-22T18:32:30.339054403Z level=info msg="Migration successfully executed" id="add type column" duration=8.458766ms grafana | logger=migrator t=2025-06-22T18:32:30.342200667Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-22T18:32:30.342849461Z level=info msg="Migration successfully executed" id="create entity_events table" duration=642.893µs grafana | logger=migrator t=2025-06-22T18:32:30.349418529Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-22T18:32:30.350656364Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.240375ms grafana | logger=migrator t=2025-06-22T18:32:30.353815588Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-22T18:32:30.354659719Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-22T18:32:30.358918443Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-22T18:32:30.359861757Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-22T18:32:30.363256591Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2025-06-22T18:32:30.364609219Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.377749ms grafana | logger=migrator t=2025-06-22T18:32:30.369742136Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-22T18:32:30.370903347Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.160711ms grafana | logger=migrator t=2025-06-22T18:32:30.374365043Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-22T18:32:30.375986372Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.619389ms grafana | logger=migrator t=2025-06-22T18:32:30.380952362Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-22T18:32:30.382860371Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.908229ms grafana | logger=migrator t=2025-06-22T18:32:30.388814747Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-22T18:32:30.390486097Z level=info msg="Migration successfully executed" id="drop index 
UQE_dashboard_public_config_uid - v2" duration=1.67163ms grafana | logger=migrator t=2025-06-22T18:32:30.394353818Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-22T18:32:30.39553725Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.184352ms grafana | logger=migrator t=2025-06-22T18:32:30.404696042Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-22T18:32:30.406253769Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.559486ms grafana | logger=migrator t=2025-06-22T18:32:30.411586442Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-22T18:32:30.41293227Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.344288ms grafana | logger=migrator t=2025-06-22T18:32:30.416464429Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-22T18:32:30.417659162Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.194523ms grafana | logger=migrator t=2025-06-22T18:32:30.422743416Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-22T18:32:30.424053773Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.309787ms grafana | logger=migrator t=2025-06-22T18:32:30.443021741Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-22T18:32:30.445002603Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.980662ms grafana | logger=migrator t=2025-06-22T18:32:30.449997734Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-22T18:32:30.472427537Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=22.429233ms grafana | logger=migrator t=2025-06-22T18:32:30.476974721Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-22T18:32:30.484803955Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.828844ms grafana | logger=migrator t=2025-06-22T18:32:30.488790749Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-22T18:32:30.498065336Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.273527ms grafana | logger=migrator t=2025-06-22T18:32:30.501917575Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-22T18:32:30.502222026Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=303.901µs grafana | logger=migrator t=2025-06-22T18:32:30.506703029Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-22T18:32:30.515497937Z level=info msg="Migration successfully executed" id="add 
share column" duration=8.794838ms grafana | logger=migrator t=2025-06-22T18:32:30.519880967Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-22T18:32:30.520035722Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=154.285µs grafana | logger=migrator t=2025-06-22T18:32:30.523411094Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-22T18:32:30.524215643Z level=info msg="Migration successfully executed" id="create file table" duration=803.789µs grafana | logger=migrator t=2025-06-22T18:32:30.52880848Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-22T18:32:30.530958708Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.151558ms grafana | logger=migrator t=2025-06-22T18:32:30.536478848Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-22T18:32:30.537685942Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.206743ms grafana | logger=migrator t=2025-06-22T18:32:30.541328463Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-22T18:32:30.542231427Z level=info msg="Migration successfully executed" id="create file_meta table" duration=900.553µs grafana | logger=migrator t=2025-06-22T18:32:30.553273116Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-22T18:32:30.55531609Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=2.040834ms grafana | logger=migrator t=2025-06-22T18:32:30.560118884Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-22T18:32:30.560297741Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=179.917µs grafana | logger=migrator t=2025-06-22T18:32:30.577567516Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-22T18:32:30.577750573Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=181.067µs grafana | logger=migrator t=2025-06-22T18:32:30.583274393Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-22T18:32:30.584319842Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.046009ms grafana | logger=migrator t=2025-06-22T18:32:30.589077283Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-22T18:32:30.589368934Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=289.771µs grafana | logger=migrator t=2025-06-22T18:32:30.593070189Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-22T18:32:30.595267038Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.173538ms grafana | logger=migrator t=2025-06-22T18:32:30.599152459Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | 
logger=migrator t=2025-06-22T18:32:30.608788488Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.636519ms grafana | logger=migrator t=2025-06-22T18:32:30.612204712Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-22T18:32:30.612603237Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=397.644µs grafana | logger=migrator t=2025-06-22T18:32:30.616914672Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-22T18:32:30.61878719Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.870968ms grafana | logger=migrator t=2025-06-22T18:32:30.622666721Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-22T18:32:30.623147028Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=479.407µs grafana | logger=migrator t=2025-06-22T18:32:30.62925711Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-22T18:32:30.629733377Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=475.537µs grafana | logger=migrator t=2025-06-22T18:32:30.634697277Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-22T18:32:30.635591669Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=891.232µs grafana | logger=migrator t=2025-06-22T18:32:30.640744246Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-22T18:32:30.649798014Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.053418ms grafana | logger=migrator t=2025-06-22T18:32:30.653049502Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-22T18:32:30.660233073Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.18103ms grafana | logger=migrator t=2025-06-22T18:32:30.663534902Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-22T18:32:30.664685153Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.149711ms grafana | logger=migrator t=2025-06-22T18:32:30.669182177Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-22T18:32:30.742745072Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=73.562305ms grafana | logger=migrator t=2025-06-22T18:32:30.746957865Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-22T18:32:30.749087322Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.129057ms grafana | logger=migrator t=2025-06-22T18:32:30.75813496Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-22T18:32:30.759332543Z 
level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.197073ms grafana | logger=migrator t=2025-06-22T18:32:30.764633296Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-22T18:32:30.797048361Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=32.415966ms grafana | logger=migrator t=2025-06-22T18:32:30.801237692Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-22T18:32:30.809799352Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.56077ms grafana | logger=migrator t=2025-06-22T18:32:30.81414367Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-22T18:32:30.814767002Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=622.372µs grafana | logger=migrator t=2025-06-22T18:32:30.819371129Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-22T18:32:30.819736043Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=365.695µs grafana | logger=migrator t=2025-06-22T18:32:30.823391795Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-22T18:32:30.823847011Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=458.996µs grafana | logger=migrator t=2025-06-22T18:32:30.829780056Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-22T18:32:30.830016105Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=237.948µs grafana | logger=migrator t=2025-06-22T18:32:30.834048181Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-22T18:32:30.83429183Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=242.629µs grafana | logger=migrator t=2025-06-22T18:32:30.838285004Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-22T18:32:30.839155896Z level=info msg="Migration successfully executed" id="create folder table" duration=873.032µs grafana | logger=migrator t=2025-06-22T18:32:30.861795477Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-22T18:32:30.862838684Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.042587ms grafana | logger=migrator t=2025-06-22T18:32:30.867903868Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-22T18:32:30.868879293Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=972.135µs grafana | logger=migrator t=2025-06-22T18:32:30.873666977Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-22T18:32:30.873687517Z level=info msg="Migration successfully executed" id="Update folder title length" duration=21.28µs grafana | 
logger=migrator t=2025-06-22T18:32:30.876963116Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-22T18:32:30.877847988Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=883.782µs grafana | logger=migrator t=2025-06-22T18:32:30.887521289Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-22T18:32:30.888527255Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.005486ms grafana | logger=migrator t=2025-06-22T18:32:30.893091991Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-22T18:32:30.893933541Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=840.63µs grafana | logger=migrator t=2025-06-22T18:32:30.897302853Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-22T18:32:30.897610634Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=307.521µs grafana | logger=migrator t=2025-06-22T18:32:30.901738584Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-22T18:32:30.901924681Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=185.697µs grafana | logger=migrator t=2025-06-22T18:32:30.904785544Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-22T18:32:30.905565352Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=780.058µs grafana | logger=migrator t=2025-06-22T18:32:30.909279938Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-22T18:32:30.910087187Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=806.718µs grafana | logger=migrator t=2025-06-22T18:32:30.921754359Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-22T18:32:30.922505976Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=751.257µs grafana | logger=migrator t=2025-06-22T18:32:30.927690654Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-22T18:32:30.928552945Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=861.601µs grafana | logger=migrator t=2025-06-22T18:32:30.932887013Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-22T18:32:30.93364726Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=760.947µs grafana | logger=migrator t=2025-06-22T18:32:30.937576003Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-22T18:32:30.9383374Z level=info msg="Migration successfully executed" id="Remove unique index 
UQE_folder_org_id_parent_uid_title" duration=762.017µs grafana | logger=migrator t=2025-06-22T18:32:30.94329549Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-22T18:32:30.943950054Z level=info msg="Migration successfully executed" id="create anon_device table" duration=654.204µs grafana | logger=migrator t=2025-06-22T18:32:30.948123154Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-22T18:32:30.948996837Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=872.653µs grafana | logger=migrator t=2025-06-22T18:32:30.952507354Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-22T18:32:30.953394005Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=885.741µs grafana | logger=migrator t=2025-06-22T18:32:30.956622722Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-22T18:32:30.957352889Z level=info msg="Migration successfully executed" id="create signing_key table" duration=728.007µs grafana | logger=migrator t=2025-06-22T18:32:30.96179479Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-22T18:32:30.962806687Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.011797ms grafana | logger=migrator t=2025-06-22T18:32:30.966172229Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-22T18:32:30.967184675Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.012386ms grafana | logger=migrator t=2025-06-22T18:32:30.970262007Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-22T18:32:30.970591778Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=329.851µs grafana | logger=migrator t=2025-06-22T18:32:30.976065717Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-22T18:32:30.983106813Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.041985ms grafana | logger=migrator t=2025-06-22T18:32:31.010978953Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-22T18:32:31.011511582Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=534.199µs grafana | logger=migrator t=2025-06-22T18:32:31.01782249Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-22T18:32:31.017840971Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=19.931µs grafana | logger=migrator t=2025-06-22T18:32:31.022353635Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-22T18:32:31.023290178Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=936.523µs grafana | logger=migrator 
t=2025-06-22T18:32:31.026591008Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-22T18:32:31.026607888Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=17.51µs grafana | logger=migrator t=2025-06-22T18:32:31.029969121Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-22T18:32:31.032093667Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.123146ms grafana | logger=migrator t=2025-06-22T18:32:31.037439761Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-22T18:32:31.039697083Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.259832ms grafana | logger=migrator t=2025-06-22T18:32:31.044993065Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-22T18:32:31.046941536Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.948631ms grafana | logger=migrator t=2025-06-22T18:32:31.050840627Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-22T18:32:31.052648273Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.806566ms grafana | logger=migrator t=2025-06-22T18:32:31.059398347Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-22T18:32:31.060255058Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=857.551µs grafana | logger=migrator t=2025-06-22T18:32:31.06361679Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-22T18:32:31.064149559Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=533.829µs grafana | logger=migrator t=2025-06-22T18:32:31.069859986Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-22T18:32:31.070922754Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=1.060148ms grafana | logger=migrator t=2025-06-22T18:32:31.076167595Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-22T18:32:31.07769153Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.524525ms grafana | logger=migrator t=2025-06-22T18:32:31.080835834Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-22T18:32:31.081861381Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.024387ms grafana | logger=migrator t=2025-06-22T18:32:31.08623913Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-22T18:32:31.095195764Z level=info msg="Migration successfully executed" id="add stack_id column" duration=8.956214ms grafana | 
logger=migrator t=2025-06-22T18:32:31.099040973Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-22T18:32:31.107403116Z level=info msg="Migration successfully executed" id="add region_slug column" duration=8.362123ms grafana | logger=migrator t=2025-06-22T18:32:31.110721886Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-22T18:32:31.117526414Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=6.804098ms grafana | logger=migrator t=2025-06-22T18:32:31.121419394Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-22T18:32:31.128775131Z level=info msg="Migration successfully executed" id="add migration uid column" duration=7.355237ms grafana | logger=migrator t=2025-06-22T18:32:31.155548031Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-22T18:32:31.155935156Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=386.045µs grafana | logger=migrator t=2025-06-22T18:32:31.160950997Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-22T18:32:31.162882407Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.93133ms grafana | logger=migrator t=2025-06-22T18:32:31.166793269Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-22T18:32:31.175823806Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.030347ms grafana | logger=migrator t=2025-06-22T18:32:31.180439613Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-22T18:32:31.180613799Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=173.856µs grafana | logger=migrator t=2025-06-22T18:32:31.183514584Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-22T18:32:31.18477141Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.253956ms grafana | logger=migrator t=2025-06-22T18:32:31.188741354Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-22T18:32:31.215330378Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=26.587684ms grafana | logger=migrator t=2025-06-22T18:32:31.219332542Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-22T18:32:31.2200603Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=726.708µs grafana | logger=migrator t=2025-06-22T18:32:31.225627121Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-22T18:32:31.227795639Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=2.164708ms grafana | logger=migrator t=2025-06-22T18:32:31.23193873Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator 
t=2025-06-22T18:32:31.232604924Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=665.704µs grafana | logger=migrator t=2025-06-22T18:32:31.237344966Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-22T18:32:31.238972504Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.626668ms grafana | logger=migrator t=2025-06-22T18:32:31.246157815Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-22T18:32:31.271216273Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=25.056298ms grafana | logger=migrator t=2025-06-22T18:32:31.276877318Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-22T18:32:31.277850333Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=975.175µs grafana | logger=migrator t=2025-06-22T18:32:31.302065401Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-22T18:32:31.30398772Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.921989ms grafana | logger=migrator t=2025-06-22T18:32:31.308556106Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-22T18:32:31.308935069Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=378.533µs grafana | logger=migrator t=2025-06-22T18:32:31.313370181Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-22T18:32:31.314235572Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=864.381µs grafana | logger=migrator t=2025-06-22T18:32:31.318989504Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-22T18:32:31.328765278Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=9.776044ms grafana | logger=migrator t=2025-06-22T18:32:31.333647155Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-22T18:32:31.344794679Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=11.144844ms grafana | logger=migrator t=2025-06-22T18:32:31.348017476Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-22T18:32:31.355769257Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=7.750521ms grafana | logger=migrator t=2025-06-22T18:32:31.361802125Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-22T18:32:31.374722424Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=12.920459ms grafana | logger=migrator t=2025-06-22T18:32:31.378687907Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-22T18:32:31.385688191Z level=info msg="Migration successfully executed" 
id="add snapshot encryption_key column" duration=6.996794ms grafana | logger=migrator t=2025-06-22T18:32:31.389113235Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-22T18:32:31.398658081Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=9.544196ms grafana | logger=migrator t=2025-06-22T18:32:31.402840003Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-22T18:32:31.403569149Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=728.806µs grafana | logger=migrator t=2025-06-22T18:32:31.407489331Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-22T18:32:31.443490655Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=35.999314ms grafana | logger=migrator t=2025-06-22T18:32:31.452872656Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-22T18:32:31.462352389Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=9.476883ms grafana | logger=migrator t=2025-06-22T18:32:31.466073994Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-22T18:32:31.475687483Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.612809ms grafana | logger=migrator t=2025-06-22T18:32:31.481271405Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-22T18:32:31.489071228Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=7.798243ms grafana | logger=migrator t=2025-06-22T18:32:31.492682688Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-22T18:32:31.505069777Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=12.391559ms grafana | logger=migrator t=2025-06-22T18:32:31.508443349Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-22T18:32:31.50845923Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=15.29µs grafana | logger=migrator t=2025-06-22T18:32:31.518254575Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-22T18:32:31.518280926Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=27.701µs grafana | logger=migrator t=2025-06-22T18:32:31.523467644Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-22T18:32:31.533952344Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.48447ms grafana | logger=migrator t=2025-06-22T18:32:31.540396547Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-22T18:32:31.549902192Z level=info msg="Migration successfully executed" id="add notification_settings column to 
alert_rule_version table" duration=9.504844ms grafana | logger=migrator t=2025-06-22T18:32:31.554752828Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-22T18:32:31.555155332Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=401.544µs grafana | logger=migrator t=2025-06-22T18:32:31.558779193Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-22T18:32:31.559069594Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=289.971µs grafana | logger=migrator t=2025-06-22T18:32:31.562639094Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-22T18:32:31.573133994Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=10.495421ms grafana | logger=migrator t=2025-06-22T18:32:31.596602624Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-22T18:32:31.604948087Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=8.342882ms grafana | logger=migrator t=2025-06-22T18:32:31.609637237Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-22T18:32:31.617539582Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=7.902105ms grafana | logger=migrator t=2025-06-22T18:32:31.620951997Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-22T18:32:31.630901797Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=9.94895ms grafana | logger=migrator t=2025-06-22T18:32:31.634886922Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-22T18:32:31.635407611Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=517.728µs grafana | logger=migrator t=2025-06-22T18:32:31.639201348Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-22T18:32:31.648619099Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.420271ms grafana | logger=migrator t=2025-06-22T18:32:31.664358029Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-22T18:32:31.675665759Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=11.31222ms grafana | logger=migrator t=2025-06-22T18:32:31.680767884Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-22T18:32:31.681039034Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=274.68µs grafana | logger=migrator t=2025-06-22T18:32:31.684654075Z level=info msg="Executing migration" id="adding action set 
permissions" grafana | logger=migrator t=2025-06-22T18:32:31.685287928Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=633.614µs grafana | logger=migrator t=2025-06-22T18:32:31.688761284Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-22T18:32:31.689977288Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.215703ms grafana | logger=migrator t=2025-06-22T18:32:31.695256879Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-22T18:32:31.695395374Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=139.545µs grafana | logger=migrator t=2025-06-22T18:32:31.701179623Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-22T18:32:31.701328919Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=150.086µs grafana | logger=migrator t=2025-06-22T18:32:31.705337605Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-22T18:32:31.705994718Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=657.033µs grafana | logger=migrator t=2025-06-22T18:32:31.711039231Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-22T18:32:31.720908878Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=9.870997ms grafana | logger=migrator t=2025-06-22T18:32:31.750160479Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-22T18:32:31.762470325Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=12.310806ms grafana | logger=migrator t=2025-06-22T18:32:31.768838395Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-22T18:32:31.769645265Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=806.49µs grafana | logger=migrator t=2025-06-22T18:32:31.774701038Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-22T18:32:31.775983495Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.282176ms grafana | logger=migrator t=2025-06-22T18:32:31.782060375Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-22T18:32:31.792053877Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=9.993343ms grafana | logger=migrator t=2025-06-22T18:32:31.796124834Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-22T18:32:31.803825684Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.704289ms grafana | logger=migrator t=2025-06-22T18:32:31.814081025Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator 
t=2025-06-22T18:32:31.814136257Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-22T18:32:31.814404897Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-22T18:32:31.814420908Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=340.113µs grafana | logger=migrator t=2025-06-22T18:32:31.818996533Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-22T18:32:31.819516392Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=519.488µs grafana | logger=migrator t=2025-06-22T18:32:31.82748094Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-22T18:32:31.829370129Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.889439ms grafana | logger=migrator t=2025-06-22T18:32:31.833421885Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-22T18:32:31.835969258Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=2.550123ms grafana | logger=migrator t=2025-06-22T18:32:31.840070547Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-22T18:32:31.841412065Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.341028ms grafana | logger=migrator t=2025-06-22T18:32:31.846837792Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-22T18:32:31.848811424Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.969921ms grafana | logger=migrator t=2025-06-22T18:32:31.855843038Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-22T18:32:31.865765808Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=9.92183ms grafana | logger=migrator t=2025-06-22T18:32:31.869099319Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-22T18:32:31.878682916Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.582867ms grafana | logger=migrator t=2025-06-22T18:32:31.892637232Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-22T18:32:31.904645027Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=12.005535ms grafana | logger=migrator t=2025-06-22T18:32:31.909286456Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-22T18:32:31.917760283Z level=info msg="Migration successfully executed" id="add 
missing_series_evals_to_resolve column to alert_rule_version" duration=8.472007ms grafana | logger=migrator t=2025-06-22T18:32:31.923407017Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-22T18:32:31.923662566Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-22T18:32:31.923682127Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=275.43µs grafana | logger=migrator t=2025-06-22T18:32:31.927994233Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-22T18:32:31.929235628Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.241275ms grafana | logger=migrator t=2025-06-22T18:32:31.933107688Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.520526536s grafana | logger=migrator t=2025-06-22T18:32:31.933750481Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-22T18:32:31.951184413Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-22T18:32:31.951461144Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-22T18:32:31.956608149Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-22T18:32:32.04823705Z level=info msg="Restored cache from database" duration=484.287µs grafana | logger=resource-migrator t=2025-06-22T18:32:32.056542011Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-22T18:32:32.056618224Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-22T18:32:32.064072834Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-22T18:32:32.065090571Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=1.017317ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.093590063Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-22T18:32:32.0937814Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=191.377µs grafana | logger=resource-migrator t=2025-06-22T18:32:32.101248091Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-22T18:32:32.101539722Z level=info msg="Migration successfully executed" id="drop table resource" duration=291.171µs grafana | logger=resource-migrator t=2025-06-22T18:32:32.106367737Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-22T18:32:32.107538469Z level=info msg="Migration successfully executed" id="create table resource" duration=1.170462ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.111865396Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-22T18:32:32.113909751Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=2.043454ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.119027275Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-22T18:32:32.119205192Z level=info msg="Migration successfully 
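Editor's note: the migrator above applies Grafana's startup schema migrations (654 of them here) while holding a database lock, before the HTTP server is started. On a default single-node container Grafana records each migration in a migration_log table in its SQLite database; a minimal sketch for auditing what ran, assuming the image-default /var/lib/grafana/grafana.db path (run inside the container or on a copied-out file):

```python
# Sketch: list the most recent Grafana schema migrations from migration_log.
# Assumption: default SQLite backend at the path below.
import sqlite3

DB_PATH = "/var/lib/grafana/grafana.db"  # assumption: image default

con = sqlite3.connect(DB_PATH)
rows = con.execute(
    "SELECT migration_id, success, timestamp "
    "FROM migration_log ORDER BY timestamp DESC LIMIT 10"
).fetchall()
for migration_id, success, ts in rows:
    print(f"{ts}  ok={bool(success)}  {migration_id}")
con.close()
```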
executed" id="drop table resource_history" duration=177.096µs grafana | logger=resource-migrator t=2025-06-22T18:32:32.125015533Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-22T18:32:32.126710114Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.694041ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.13099811Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-22T18:32:32.132425881Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.427741ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.136864262Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-22T18:32:32.138136638Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.272016ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.142978714Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-22T18:32:32.143321206Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=342.353µs grafana | logger=resource-migrator t=2025-06-22T18:32:32.182929851Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-22T18:32:32.18453665Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.606469ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.192622373Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-22T18:32:32.19393515Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.312507ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.200960425Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-22T18:32:32.201206434Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=247.239µs grafana | logger=resource-migrator t=2025-06-22T18:32:32.207983029Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-22T18:32:32.210103536Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=2.116707ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.216735976Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-22T18:32:32.218116667Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.38024ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.225622278Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-22T18:32:32.227112152Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.489674ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.231480601Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-22T18:32:32.241479993Z level=info msg="Migration successfully executed" id="Add column 
previous_resource_version in resource_history" duration=10.021743ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.246947161Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-22T18:32:32.257743882Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=10.794611ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.262796786Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-22T18:32:32.264343861Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.547045ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.268513543Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-22T18:32:32.270544606Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=2.029713ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.276331246Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-22T18:32:32.286332048Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.003842ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.292103977Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-22T18:32:32.302321638Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=10.216861ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.333998345Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-22T18:32:32.33412682Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-22T18:32:32.334907808Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=908.893µs grafana | logger=resource-migrator t=2025-06-22T18:32:32.34073843Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-22T18:32:32.342602187Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.863227ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.346471748Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-22T18:32:32.357101402Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=10.629274ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.3647618Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-22T18:32:32.367078764Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=2.315894ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.372170089Z level=info msg="migrations completed" performed=26 skipped=0 duration=308.135656ms grafana | logger=resource-migrator t=2025-06-22T18:32:32.373115283Z level=info msg="Unlocking database" grafana | t=2025-06-22T18:32:32.373478696Z level=info caller=logger.go:214 time=2025-06-22T18:32:32.373454695Z 
msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-22T18:32:32.384022238Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-22T18:32:32.427437022Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-22T18:32:32.427471613Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-22T18:32:32.427613188Z level=info msg="Plugins loaded" count=53 duration=43.59157ms grafana | logger=query_data t=2025-06-22T18:32:32.433027344Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-22T18:32:32.437457735Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-22T18:32:32.44865397Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-22T18:32:32.454973459Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-22T18:32:32.45499159Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-22T18:32:32.45719818Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=grafanaStorageLogger t=2025-06-22T18:32:32.457507661Z level=info msg="Storage starting" grafana | logger=plugin.backgroundinstaller t=2025-06-22T18:32:32.459021126Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=ngalert.state.manager t=2025-06-22T18:32:32.460982777Z level=info msg="Warming state cache for startup" grafana | logger=http.server t=2025-06-22T18:32:32.462074237Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=ngalert.multiorg.alertmanager t=2025-06-22T18:32:32.463089823Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=plugins.update.checker t=2025-06-22T18:32:32.55432072Z level=info msg="Update check succeeded" duration=95.712119ms grafana | logger=grafana.update.checker t=2025-06-22T18:32:32.559823369Z level=info msg="Update check succeeded" duration=102.021607ms grafana | logger=sqlstore.transactions t=2025-06-22T18:32:32.567146864Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=sqlstore.transactions t=2025-06-22T18:32:32.6230643Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 grafana | logger=ngalert.state.manager t=2025-06-22T18:32:32.62414854Z level=info msg="State cache has been initialized" states=0 duration=163.163383ms grafana | logger=ngalert.scheduler t=2025-06-22T18:32:32.624202752Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-22T18:32:32.624359678Z level=info msg=starting first_tick=2025-06-22T18:32:40Z grafana | logger=provisioning.datasources t=2025-06-22T18:32:32.631199725Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-22T18:32:32.639489066Z level=info msg="Patterns update finished" duration=181.465577ms grafana | logger=provisioning.alerting t=2025-06-22T18:32:32.660465936Z level=info msg="starting to provision alerting" grafana | 
logger=provisioning.alerting t=2025-06-22T18:32:32.660500058Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-22T18:32:32.663012789Z level=info msg="starting to provision dashboards" grafana | logger=grafana-apiserver t=2025-06-22T18:32:32.707938577Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-22T18:32:32.709172661Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-22T18:32:32.710679376Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-22T18:32:32.711415622Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-22T18:32:32.71218933Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-22T18:32:32.714745933Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-22T18:32:32.715467479Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-22T18:32:32.71601778Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-22T18:32:32.716570489Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-22T18:32:32.765679809Z level=info msg="app registry initialized" grafana | logger=provisioning.dashboard t=2025-06-22T18:32:33.279512899Z level=info msg="finished to provision dashboards" grafana | logger=plugin.installer t=2025-06-22T18:32:33.285484875Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-22T18:32:33.414011823Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-22T18:32:33.449088264Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-22T18:32:33.449111725Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=990.072847ms grafana | logger=plugin.backgroundinstaller t=2025-06-22T18:32:33.449137255Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-22T18:32:33.706742141Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-22T18:32:33.762255752Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-22T18:32:33.77765387Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-22T18:32:33.777673581Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=328.531425ms grafana | logger=plugin.backgroundinstaller t=2025-06-22T18:32:33.777695211Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=plugin.installer 
t=2025-06-22T18:32:34.033979879Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-22T18:32:34.092043683Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-22T18:32:34.108097564Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-22T18:32:34.108117255Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=330.417284ms grafana | logger=plugin.backgroundinstaller t=2025-06-22T18:32:34.108169217Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-22T18:32:34.374827701Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-22T18:32:34.434213813Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.3 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-22T18:32:34.452963882Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-22T18:32:34.452986613Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=344.808236ms grafana | logger=infra.usagestats t=2025-06-22T18:33:59.463392748Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2025-06-22 18:32:22,581] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-22 18:32:22,581] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-22 18:32:22,581] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-22 18:32:22,582] INFO Client environment:java.vendor=Azul Systems, Inc. 
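Editor's note: at this point the Grafana container is fully up: schema migrated, default admin and organization created, the PolicyPrometheus datasource and dashboards provisioned, and the HTTP server listening on port 3000. A CSIT-style readiness probe can be as small as the sketch below; /api/health and /api/datasources are Grafana's documented HTTP API endpoints, while the host/port and the admin:admin credentials are the container defaults and an assumption here:

```python
# Sketch: wait for Grafana to report healthy, then confirm the provisioned
# datasource is visible. Base URL and credentials are assumptions.
import requests

BASE = "http://localhost:3000"

health = requests.get(f"{BASE}/api/health", timeout=5).json()
print("database:", health.get("database"))  # expect "ok"

ds = requests.get(f"{BASE}/api/datasources",
                  auth=("admin", "admin"), timeout=5).json()
print([d["name"] for d in ds])  # expect ["PolicyPrometheus"]
```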
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
kafka | [2025-06-22 18:32:22,581] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,581] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,581] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,582] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,583] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,583] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,583] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,586] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@221af3c0 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,590] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-22 18:32:22,594] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-22 18:32:22,602] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-22 18:32:22,618] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-22 18:32:22,619] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-22 18:32:22,626] INFO Socket connection established, initiating session, client: /172.17.0.5:39468, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-22 18:32:22,678] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000226250000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-22 18:32:22,795] INFO Session: 0x100000226250000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:22,796] INFO EventThread shut down for session: 0x100000226250000 (org.apache.zookeeper.ClientCnxn)
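Editor's note: the "Check if Zookeeper is healthy" preflight above opens a throwaway ZooKeeper session (id 0x100000226250000, 40s negotiated timeout) and closes it as soon as the handshake completes. The same check can be reproduced from Python with the kazoo client; a minimal sketch, assuming the zookeeper:2181 address from the log is reachable (i.e., run on the same compose network):

```python
# Sketch: replicate the container's ZooKeeper health preflight with kazoo.
# The check passes iff a session can be established within the timeout.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zookeeper:2181")  # assumption: same compose network
try:
    zk.start(timeout=40)  # mirrors the 40s session timeout negotiated above
    print("zookeeper healthy, session id:", hex(zk.client_id[0]))
finally:
    zk.stop()
    zk.close()
```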
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2025-06-22 18:32:23,567] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2025-06-22 18:32:23,848] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2025-06-22 18:32:23,921] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2025-06-22 18:32:23,922] INFO starting (kafka.server.KafkaServer)
kafka | [2025-06-22 18:32:23,922] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2025-06-22 18:32:23,935] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,939] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,940] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,940] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,940] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,940] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,940] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,940] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,941] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper)
kafka | [2025-06-22 18:32:23,945] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2025-06-22 18:32:23,950] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-22 18:32:23,952] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-22 18:32:23,957] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-22 18:32:23,967] INFO Socket connection established, initiating session, client: /172.17.0.5:39470, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-22 18:32:23,979] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000226250001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2025-06-22 18:32:23,983] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2025-06-22 18:32:24,273] INFO Cluster ID = Y94SwhAjTxOcMpy5L2vWew (kafka.server.KafkaServer)
kafka | [2025-06-22 18:32:24,277] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2025-06-22 18:32:24,324] INFO KafkaConfig values:
kafka | 	advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | 	alter.config.policy.class.name = null
kafka | 	alter.log.dirs.replication.quota.window.num = 11
kafka | 	alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | 	authorizer.class.name = 
kafka | 	auto.create.topics.enable = true
kafka | 	auto.include.jmx.reporter = true
kafka | 	auto.leader.rebalance.enable = true
kafka | 	background.threads = 10
kafka | 	broker.heartbeat.interval.ms = 2000
kafka | 	broker.id = 1
kafka | 	broker.id.generation.enable = true
kafka | 	broker.rack = null
kafka | 	broker.session.timeout.ms = 9000
kafka | 	client.quota.callback.class = null
kafka | 	compression.type = producer
kafka | 	connection.failed.authentication.delay.ms = 100
kafka | 	connections.max.idle.ms = 600000
kafka | 	connections.max.reauth.ms = 0
kafka | 	control.plane.listener.name = null
kafka | 	controlled.shutdown.enable = true
kafka | 	controlled.shutdown.max.retries = 3
kafka | 	controlled.shutdown.retry.backoff.ms = 5000
kafka | 	controller.listener.names = null
kafka | 	controller.quorum.append.linger.ms = 25
kafka | 	controller.quorum.election.backoff.max.ms = 1000
kafka | 	controller.quorum.election.timeout.ms = 1000
kafka | 	controller.quorum.fetch.timeout.ms = 2000
kafka | 	controller.quorum.request.timeout.ms = 2000
kafka | 	controller.quorum.retry.backoff.ms = 20
kafka | 	controller.quorum.voters = []
kafka | 	controller.quota.window.num = 11
kafka | 	controller.quota.window.size.seconds = 1
kafka | 	controller.socket.timeout.ms = 30000
kafka | 	create.topic.policy.class.name = null
kafka | 	default.replication.factor = 1
kafka | 	delegation.token.expiry.check.interval.ms = 3600000
kafka | 	delegation.token.expiry.time.ms = 86400000
kafka | 	delegation.token.master.key = null
kafka | 	delegation.token.max.lifetime.ms = 604800000
kafka | 	delegation.token.secret.key = null
kafka | 	delete.records.purgatory.purge.interval.requests = 1
kafka | 	delete.topic.enable = true
kafka | 	early.start.listeners = null
kafka | 	fetch.max.bytes = 57671680
kafka | 	fetch.purgatory.purge.interval.requests = 1000
kafka | 	group.initial.rebalance.delay.ms = 3000
kafka | 	group.max.session.timeout.ms = 1800000
kafka | 	group.max.size = 2147483647
kafka | 	group.min.session.timeout.ms = 6000
kafka | 	initial.broker.registration.timeout.ms = 60000
kafka | 	inter.broker.listener.name = PLAINTEXT
kafka | 	inter.broker.protocol.version = 3.4-IV0
kafka | 	kafka.metrics.polling.interval.secs = 10
kafka | 	kafka.metrics.reporters = []
kafka | 	leader.imbalance.check.interval.seconds = 300
kafka | 	leader.imbalance.per.broker.percentage = 10
kafka | 	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | 	listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | 	log.cleaner.backoff.ms = 15000
kafka | 	log.cleaner.dedupe.buffer.size = 134217728
kafka | 	log.cleaner.delete.retention.ms = 86400000
kafka | 	log.cleaner.enable = true
kafka | 	log.cleaner.io.buffer.load.factor = 0.9
kafka | 	log.cleaner.io.buffer.size = 524288
kafka | 	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | 	log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | 	log.cleaner.min.cleanable.ratio = 0.5
kafka | 	log.cleaner.min.compaction.lag.ms = 0
kafka | 	log.cleaner.threads = 1
kafka | 	log.cleanup.policy = [delete]
kafka | 	log.dir = /tmp/kafka-logs
kafka | 	log.dirs = /var/lib/kafka/data
kafka | 	log.flush.interval.messages = 9223372036854775807
kafka | 	log.flush.interval.ms = null
kafka | 	log.flush.offset.checkpoint.interval.ms = 60000
kafka | 	log.flush.scheduler.interval.ms = 9223372036854775807
kafka | 	log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | 	log.index.interval.bytes = 4096
kafka | 	log.index.size.max.bytes = 10485760
kafka | 	log.message.downconversion.enable = true
kafka | 	log.message.format.version = 3.0-IV1
kafka | 	log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | 	log.message.timestamp.type = CreateTime
kafka | 	log.preallocate = false
kafka | 	log.retention.bytes = -1
kafka | 	log.retention.check.interval.ms = 300000
kafka | 	log.retention.hours = 168
kafka | 	log.retention.minutes = null
kafka | 	log.retention.ms = null
kafka | 	log.roll.hours = 168
kafka | 	log.roll.jitter.hours = 0
kafka | 	log.roll.jitter.ms = null
kafka | 	log.roll.ms = null
kafka | 	log.segment.bytes = 1073741824
kafka | 	log.segment.delete.delay.ms = 60000
kafka | 	max.connection.creation.rate = 2147483647
kafka | 	max.connections = 2147483647
kafka | 	max.connections.per.ip = 2147483647
kafka | 	max.connections.per.ip.overrides = 
kafka | 	max.incremental.fetch.session.cache.slots = 1000
kafka | 	message.max.bytes = 1048588
kafka | 	metadata.log.dir = null
kafka | 	metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | 	metadata.log.max.snapshot.interval.ms = 3600000
kafka | 	metadata.log.segment.bytes = 1073741824
kafka | 	metadata.log.segment.min.bytes = 8388608
kafka | 	metadata.log.segment.ms = 604800000
kafka | 	metadata.max.idle.interval.ms = 500
kafka | 	metadata.max.retention.bytes = 104857600
kafka | 	metadata.max.retention.ms = 604800000
kafka | 	metric.reporters = []
kafka | 	metrics.num.samples = 2
kafka | 	metrics.recording.level = INFO
kafka | 	metrics.sample.window.ms = 30000
kafka | 	min.insync.replicas = 1
kafka | 	node.id = 1
kafka | 	num.io.threads = 8
kafka | 	num.network.threads = 3
kafka | 	num.partitions = 1
kafka | 	num.recovery.threads.per.data.dir = 1
kafka | 	num.replica.alter.log.dirs.threads = null
kafka | 	num.replica.fetchers = 1
kafka | 	offset.metadata.max.bytes = 4096
kafka | 	offsets.commit.required.acks = -1
kafka | 	offsets.commit.timeout.ms = 5000
kafka | 	offsets.load.buffer.size = 5242880
kafka | 	offsets.retention.check.interval.ms = 600000
kafka | 	offsets.retention.minutes = 10080
kafka | 	offsets.topic.compression.codec = 0
kafka | 	offsets.topic.num.partitions = 50
kafka | 	offsets.topic.replication.factor = 1
kafka | 	offsets.topic.segment.bytes = 104857600
kafka | 	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | 	password.encoder.iterations = 4096
kafka | 	password.encoder.key.length = 128
kafka | 	password.encoder.keyfactory.algorithm = null
kafka | 	password.encoder.old.secret = null
kafka | 	password.encoder.secret = null
kafka | 	principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | 	process.roles = []
kafka | 	producer.id.expiration.check.interval.ms = 600000
kafka | 	producer.id.expiration.ms = 86400000
kafka | 	producer.purgatory.purge.interval.requests = 1000
kafka | 	queued.max.request.bytes = -1
kafka | 	queued.max.requests = 500
kafka | 	quota.window.num = 11
kafka | 	quota.window.size.seconds = 1
kafka | 	remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | 	remote.log.manager.task.interval.ms = 30000
kafka | 	remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | 	remote.log.manager.task.retry.backoff.ms = 500
kafka | 	remote.log.manager.task.retry.jitter = 0.2
kafka | 	remote.log.manager.thread.pool.size = 10
kafka | 	remote.log.metadata.manager.class.name = null
kafka | 	remote.log.metadata.manager.class.path = null
kafka | 	remote.log.metadata.manager.impl.prefix = null
kafka | 	remote.log.metadata.manager.listener.name = null
kafka | 	remote.log.reader.max.pending.tasks = 100
kafka | 	remote.log.reader.threads = 10
kafka | 	remote.log.storage.manager.class.name = null
kafka | 	remote.log.storage.manager.class.path = null
kafka | 	remote.log.storage.manager.impl.prefix = null
kafka | 	remote.log.storage.system.enable = false
kafka | 	replica.fetch.backoff.ms = 1000
kafka | 	replica.fetch.max.bytes = 1048576
kafka | 	replica.fetch.min.bytes = 1
kafka | 	replica.fetch.response.max.bytes = 10485760
kafka | 	replica.fetch.wait.max.ms = 500
kafka | 	replica.high.watermark.checkpoint.interval.ms = 5000
kafka | 	replica.lag.time.max.ms = 30000
kafka | 	replica.selector.class = null
kafka | 	replica.socket.receive.buffer.bytes = 65536
kafka | 	replica.socket.timeout.ms = 30000
kafka | 	replication.quota.window.num = 11
kafka | 	replication.quota.window.size.seconds = 1
kafka | 	request.timeout.ms = 30000
kafka | 	reserved.broker.max.id = 1000
kafka | 	sasl.client.callback.handler.class = null
kafka | 	sasl.enabled.mechanisms = [GSSAPI]
kafka | 	sasl.jaas.config = null
kafka | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | 	sasl.kerberos.min.time.before.relogin = 60000
kafka | 	sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | 	sasl.kerberos.service.name = null
kafka | 	sasl.kerberos.ticket.renew.jitter = 0.05
kafka | 	sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | 	sasl.login.callback.handler.class = null
kafka | 	sasl.login.class = null
kafka | 	sasl.login.connect.timeout.ms = null
kafka | 	sasl.login.read.timeout.ms = null
kafka | 	sasl.login.refresh.buffer.seconds = 300
kafka | 	sasl.login.refresh.min.period.seconds = 60
kafka | 	sasl.login.refresh.window.factor = 0.8
kafka | 	sasl.login.refresh.window.jitter = 0.05
kafka | 	sasl.login.retry.backoff.max.ms = 10000
kafka | 	sasl.login.retry.backoff.ms = 100
kafka | 	sasl.mechanism.controller.protocol = GSSAPI
kafka | 	sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | 	sasl.oauthbearer.clock.skew.seconds = 30
kafka | 	sasl.oauthbearer.expected.audience = null
kafka | 	sasl.oauthbearer.expected.issuer = null
kafka | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | 	sasl.oauthbearer.jwks.endpoint.url = null
kafka | 	sasl.oauthbearer.scope.claim.name = scope
kafka | 	sasl.oauthbearer.sub.claim.name = sub
kafka | 	sasl.oauthbearer.token.endpoint.url = null
kafka | 	sasl.server.callback.handler.class = null
kafka | 	sasl.server.max.receive.size = 524288
kafka | 	security.inter.broker.protocol = PLAINTEXT
kafka | 	security.providers = null
kafka | 	socket.connection.setup.timeout.max.ms = 30000
kafka | 	socket.connection.setup.timeout.ms = 10000
kafka | 	socket.listen.backlog.size = 50
kafka | 	socket.receive.buffer.bytes = 102400
kafka | 	socket.request.max.bytes = 104857600
kafka | 	socket.send.buffer.bytes = 102400
kafka | 	ssl.cipher.suites = []
kafka | 	ssl.client.auth = none
kafka | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | 	ssl.endpoint.identification.algorithm = https
kafka | 	ssl.engine.factory.class = null
kafka | 	ssl.key.password = null
kafka | 	ssl.keymanager.algorithm = SunX509
kafka | 	ssl.keystore.certificate.chain = null
kafka | 	ssl.keystore.key = null
kafka | 	ssl.keystore.location = null
kafka | 	ssl.keystore.password = null
kafka | 	ssl.keystore.type = JKS
kafka | 	ssl.principal.mapping.rules = DEFAULT
kafka | 	ssl.protocol = TLSv1.3
kafka | 	ssl.provider = null
kafka | 	ssl.secure.random.implementation = null
kafka | 	ssl.trustmanager.algorithm = PKIX
kafka | 	ssl.truststore.certificates = null
kafka | 	ssl.truststore.location = null
kafka | 	ssl.truststore.password = null
kafka | 	ssl.truststore.type = JKS
kafka | 	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | 	transaction.max.timeout.ms = 900000
kafka | 	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | 	transaction.state.log.load.buffer.size = 5242880
kafka | 	transaction.state.log.min.isr = 2
kafka | 	transaction.state.log.num.partitions = 50
kafka | 	transaction.state.log.replication.factor = 3
kafka | 	transaction.state.log.segment.bytes = 104857600
kafka | 	transactional.id.expiration.ms = 604800000
kafka | 	unclean.leader.election.enable = false
kafka | 	zookeeper.clientCnxnSocket = null
kafka | 	zookeeper.connect = zookeeper:2181
kafka | 	zookeeper.connection.timeout.ms = null
kafka | 	zookeeper.max.in.flight.requests = 10
kafka | 	zookeeper.metadata.migration.enable = false
kafka | 	zookeeper.session.timeout.ms = 18000
kafka | 	zookeeper.set.acl = false
kafka | 	zookeeper.ssl.cipher.suites = null
kafka | 	zookeeper.ssl.client.enable = false
kafka | 	zookeeper.ssl.crl.enable = false
kafka | 	zookeeper.ssl.enabled.protocols = null
kafka | 	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | 	zookeeper.ssl.keystore.location = null
kafka | 	zookeeper.ssl.keystore.password = null
kafka | 	zookeeper.ssl.keystore.type = null
kafka | 	zookeeper.ssl.ocsp.enable = false
kafka | 	zookeeper.ssl.protocol = TLSv1.2
kafka | 	zookeeper.ssl.truststore.location = null
kafka | 	zookeeper.ssl.truststore.password = null
kafka | 	zookeeper.ssl.truststore.type = null
kafka |  (kafka.server.KafkaConfig)
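Editor's note: two items in the config dump matter for this CSIT environment. The broker runs in ZooKeeper mode (process.roles is empty, zookeeper.connect=zookeeper:2181), and it advertises two listeners: kafka:9092 inside the compose network and localhost:29092 for the host. Because auto.create.topics.enable=true, a host-side smoke test needs nothing but a producer and a consumer; a sketch using the kafka-python package (the topic name is illustrative, not from the log):

```python
# Sketch: host-side smoke test against the PLAINTEXT_HOST listener.
# localhost:29092 comes from advertised.listeners above; the topic is
# auto-created because auto.create.topics.enable=true.
from kafka import KafkaConsumer, KafkaProducer

BOOTSTRAP = "localhost:29092"
TOPIC = "csit-smoke-test"  # illustrative name

producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
producer.send(TOPIC, b"hello from the host")
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BOOTSTRAP,
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,  # stop iterating if nothing arrives
)
for record in consumer:
    print(record.topic, record.offset, record.value)
    break
```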
(kafka.log.LogManager) kafka | [2025-06-22 18:32:24,418] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2025-06-22 18:32:24,464] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) kafka | [2025-06-22 18:32:24,478] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2025-06-22 18:32:24,490] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-22 18:32:24,528] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-22 18:32:24,880] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-22 18:32:24,884] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-22 18:32:24,908] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2025-06-22 18:32:24,908] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-22 18:32:24,909] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-22 18:32:24,913] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2025-06-22 18:32:24,917] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-22 18:32:24,935] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-22 18:32:24,937] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-22 18:32:24,939] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-22 18:32:24,940] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-22 18:32:24,954] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2025-06-22 18:32:24,986] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2025-06-22 18:32:25,013] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750617144999,1750617144999,1,0,0,72057603267821569,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2025-06-22 18:32:25,016] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2025-06-22 18:32:25,096] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2025-06-22 18:32:25,104] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-22 18:32:25,108] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-22 18:32:25,109] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-22 18:32:25,119] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2025-06-22 18:32:25,130] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,134] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,136] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:32:25,138] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-22 18:32:25,143] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:32:25,162] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-22 18:32:25,168] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-22 18:32:25,171] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2025-06-22 18:32:25,172] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 
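
Note: at this point the broker has registered itself in ZooKeeper under /brokers/ids/1 with both advertised addresses, and broker 1 has won the controller election. A quick way to confirm both facts from outside the broker is to read the znodes back with zookeeper-shell (bundled with the Kafka/ZooKeeper images); the container name and host:port are assumptions matching the compose setup implied above:

$ docker exec zookeeper zookeeper-shell localhost:2181 get /brokers/ids/1
$ docker exec zookeeper zookeeper-shell localhost:2181 get /controller

The first command should echo the registration JSON with the PLAINTEXT://kafka:9092 and PLAINTEXT_HOST://localhost:29092 endpoints; the second should report brokerid 1, matching the "successfully elected as the controller" entry above.
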
(kafka.server.metadata.ZkMetadataCache) kafka | [2025-06-22 18:32:25,172] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,180] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,185] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,188] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,218] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-22 18:32:25,221] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,235] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,247] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2025-06-22 18:32:25,256] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2025-06-22 18:32:25,269] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2025-06-22 18:32:25,272] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,273] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,273] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,274] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,283] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
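
Note: "Enabling request processing" is the point at which the data-plane acceptors created earlier (ports 9092 and 29092) actually begin serving clients. A simple liveness check against the second listener, sketched under the assumption that the broker container is named "kafka":

$ docker exec kafka kafka-broker-api-versions --bootstrap-server localhost:29092

If the broker answers, the output lists broker kafka:9092 (id: 1) with its supported API versions; a connection error here would mean the SocketServer never reached this step.
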
(kafka.network.SocketServer) kafka | [2025-06-22 18:32:25,284] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,284] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,284] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,285] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2025-06-22 18:32:25,286] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,290] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2025-06-22 18:32:25,294] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-22 18:32:25,294] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-22 18:32:25,295] INFO Kafka startTimeMs: 1750617145288 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-22 18:32:25,298] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-22 18:32:25,301] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-22 18:32:25,302] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-22 18:32:25,304] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-22 18:32:25,305] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-22 18:32:25,305] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-22 18:32:25,305] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-22 18:32:25,312] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-22 18:32:25,312] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,322] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-22 18:32:25,326] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,326] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,326] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,326] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,328] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions 
triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,343] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:25,388] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-22 18:32:25,424] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-22 18:32:25,462] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-22 18:32:30,345] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:30,345] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:59,481] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-22 18:32:59,481] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-22 18:32:59,483] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:59,491] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:59,527] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(WIkwDHeHSyepRFt-FUY17w),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:59,528] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 
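
Note: both topics above appear to be created on demand (auto topic creation): policy-pdp-pap with a single partition when the policy components first use it, and __consumer_offsets with its 50 compacted partitions on the first consumer-group operation. For illustration only, the explicit equivalents with the stock admin tooling would look roughly like this, again assuming a broker container named "kafka":

$ docker exec kafka kafka-topics --bootstrap-server kafka:9092 --create \
    --topic policy-pdp-pap --partitions 1 --replication-factor 1
$ docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe \
    --topic policy-pdp-pap

The describe output should show one partition with leader 1 and ISR [1], matching the NewPartition -> OnlinePartition and LeaderAndIsr transitions that follow.
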
(kafka.controller.KafkaController) kafka | [2025-06-22 18:32:59,530] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,530] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,534] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,534] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,558] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,560] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-22 18:32:59,562] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-22 18:32:59,565] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,567] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,567] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,576] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,577] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,580] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(gbFTgedjTv-3MPkDYlSw2g),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:59,580] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-22 18:32:59,580] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,580] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,580] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,580] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,580] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,587] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 
18:32:59,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-22 18:32:59,591] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica 
to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,598] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-22 18:32:59,612] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,614] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-22 18:32:59,614] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,616] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-22 18:32:59,616] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,710] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-22 18:32:59,722] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-22 18:32:59,723] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-22 18:32:59,724] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-22 18:32:59,725] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(WIkwDHeHSyepRFt-FUY17w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-22 18:32:59,753] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-22 18:32:59,759] INFO [Broker id=1] Finished LeaderAndIsr request in 186ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,764] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=WIkwDHeHSyepRFt-FUY17w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-22 18:32:59,770] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-22 18:32:59,771] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-22 18:32:59,775] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-22 18:32:59,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,796] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,797] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 
from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,798] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-22 18:32:59,799] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-22 18:32:59,799] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-22 18:32:59,799] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-22 18:32:59,799] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-22 18:32:59,799] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-22 18:32:59,799] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-22 18:32:59,800] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-22 18:32:59,800] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-22 18:32:59,800] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-22 18:32:59,800] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-22 
18:32:59,800] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-22 18:32:59,800] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-22 18:32:59,800] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-22 18:32:59,800] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-22 18:32:59,801] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-22 18:32:59,801] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-22 18:32:59,801] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-22 18:32:59,801] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-22 18:32:59,801] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-22 18:32:59,801] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-22 18:32:59,801] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-22 18:32:59,801] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-22 18:32:59,802] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-22 18:32:59,802] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-22 18:32:59,802] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-22 18:32:59,802] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-22 18:32:59,802] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-22 18:32:59,802] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-22 18:32:59,802] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-22 18:32:59,802] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-22 18:32:59,803] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-22 18:32:59,803] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-22 18:32:59,803] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-22 18:32:59,803] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-22 18:32:59,803] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-22 18:32:59,803] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-22 18:32:59,803] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-22 18:32:59,803] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-22 18:32:59,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-22 18:32:59,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-22 18:32:59,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-22 18:32:59,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-22 18:32:59,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-22 18:32:59,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-22 18:32:59,804] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-22 18:32:59,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-22 18:32:59,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-22 18:32:59,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-22 18:32:59,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-22 18:32:59,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-22 18:32:59,805] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-22 18:32:59,806] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,811] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,811] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,811] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,811] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,811] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,811] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,811] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,812] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,812] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,812] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,812] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,812] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,812] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,812] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,815] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,816] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,816] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,816] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,816] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,816] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,816] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,816] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,817] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Broker 
id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,818] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica 
(state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,819] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 
(state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,820] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,821] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,821] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,821] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,821] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,821] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-22 18:32:59,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-22 18:32:59,821] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 
from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from 
controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-22 18:32:59,852] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-22 18:32:59,853] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-22 18:32:59,859] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
kafka | [2025-06-22 18:32:59,860] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger)
kafka | [2025-06-22 18:32:59,869] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,875] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,876] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,876] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,877] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,888] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,889] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,889] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,889] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,889] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,900] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,901] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,901] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,901] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,901] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,908] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,909] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,909] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,909] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,909] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,917] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,919] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,919] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,919] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,919] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,931] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,934] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,935] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,935] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,935] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,943] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,944] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,944] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,944] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,944] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,951] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,952] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,952] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,952] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,952] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,966] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,967] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,967] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,967] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,967] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,979] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,980] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,980] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,980] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,980] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,988] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:32:59,989] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:32:59,989] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,989] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:32:59,989] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:32:59,999] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,000] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,000] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,001] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,001] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,010] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,011] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,011] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,011] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,012] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,024] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,025] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,026] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,026] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,026] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,035] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,036] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,036] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,036] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,036] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,046] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,047] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,047] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,047] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,047] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,056] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,059] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,060] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,060] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,060] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,071] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,072] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,072] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,072] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,073] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,084] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,085] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,085] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,085] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,085] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,094] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,096] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,097] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,097] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,097] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,105] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,106] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,106] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,106] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,106] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,115] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,116] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,116] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,117] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,117] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,126] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,126] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,126] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,126] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,126] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,138] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,140] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,140] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,140] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,140] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,150] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,151] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,151] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,151] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,151] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,156] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,157] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,157] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,157] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,157] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,167] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,168] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,168] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,168] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,169] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,175] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,176] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,176] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,176] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,176] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,181] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,182] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,182] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,182] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,182] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,187] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,188] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,188] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,188] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,188] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,194] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,195] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,195] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,195] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,195] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,201] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,202] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,202] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,202] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,202] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,211] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,213] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,213] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,213] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,213] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,221] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,222] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,222] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,223] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,223] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,233] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,234] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,234] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,235] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,235] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,243] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,244] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,244] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,244] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,244] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,255] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,256] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,256] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,256] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,256] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,263] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,264] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,264] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,264] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,265] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,273] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,274] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,274] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,274] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,275] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,283] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,284] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,284] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,284] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,285] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,293] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,294] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,294] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,295] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,295] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,307] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,308] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,310] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,310] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,310] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,322] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,323] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,323] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,323] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,324] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,332] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,333] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,333] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,333] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,334] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,344] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,345] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,345] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,345] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,345] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,355] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,356] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,356] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,356] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,357] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,369] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,370] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,371] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,371] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,371] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,379] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,380] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,380] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,380] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,380] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,388] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,389] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,389] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,389] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,390] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,400] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:00,402] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:00,402] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,402] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:00,402] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(gbFTgedjTv-3MPkDYlSw2g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:00,407] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-22 18:33:00,407] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-22 18:33:00,407] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-22 18:33:00,408] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-22 18:33:00,409] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the
become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-22 18:33:00,410] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-22 18:33:00,411] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-22 18:33:00,411] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-22 18:33:00,413] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,415] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,417] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,417] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,417] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,417] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,417] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,417] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,417] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,417] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,417] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,417] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,418] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,418] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,418] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,418] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,418] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,418] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,418] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,418] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,418] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,418] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,418] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,419] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,419] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,419] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,419] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,419] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,419] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,419] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,419] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 
0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,419] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,419] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,419] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,420] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,420] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,420] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,420] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,420] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,420] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,420] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,420] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,420] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,420] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,420] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,421] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,421] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,421] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,421] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,421] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for 
epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,421] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,421] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,421] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,421] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,421] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,421] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,421] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,421] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,422] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,422] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,422] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,422] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,422] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,422] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,422] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,422] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,422] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,422] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,422] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,423] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,423] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,423] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,423] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,423] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,423] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,423] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,423] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,423] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,423] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,423] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,424] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,424] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,424] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,424] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,424] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,424] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,424] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,424] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,424] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,425] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,426] INFO [Broker id=1] Finished LeaderAndIsr request in 611ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2025-06-22 18:33:00,425] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,427] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,427] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,428] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,428] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,428] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,428] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,428] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,428] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,428] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,428] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,428] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,429] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,429] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,429] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,429] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,429] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,429] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,429] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,429] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,429] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,429] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,430] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,430] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=gbFTgedjTv-3MPkDYlSw2g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-22 18:33:00,430] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,430] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,430] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,430] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,430] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,430] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,430] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,430] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,430] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,431] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,431] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,431] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,431] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,431] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,431] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,431] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,431] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,431] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,434] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 9 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-22 18:33:00,434] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,435] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-22 18:33:00,439] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
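[Editor's note] The TRACE flood above is the broker caching leader state for all 50 __consumer_offsets partitions from a single UpdateMetadata request; the INFO line summarizes it. A client sees the same cached state through an ordinary metadata request. A minimal sketch, assuming a confluent-kafka client and the kafka:9092 listener of this test network (both assumptions of this note, not part of the build):

    # Sketch only: dump the leader/ISR view a client gets back, which mirrors
    # the UpdateMetadataPartitionState fields the broker cached above.
    from confluent_kafka.admin import AdminClient

    admin = AdminClient({"bootstrap.servers": "kafka:9092"})
    md = admin.list_topics(timeout=10)  # one MetadataResponse for the whole cluster
    topic = md.topics.get("__consumer_offsets")
    if topic is not None:
        for pid, part in sorted(topic.partitions.items()):
            print(f"__consumer_offsets-{pid}: leader={part.leader} "
                  f"replicas={part.replicas} isr={part.isrs}")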
kafka | [2025-06-22 18:33:01,195] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group b32030d5-804a-4841-8170-dff4e89c9a0b in Empty state. Created a new member id consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3-e96f48f0-7209-4ff3-90c9-b9f68619e50a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:01,210] INFO [GroupCoordinator 1]: Preparing to rebalance group b32030d5-804a-4841-8170-dff4e89c9a0b in state PreparingRebalance with old generation 0 (__consumer_offsets-16) (reason: Adding new member consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3-e96f48f0-7209-4ff3-90c9-b9f68619e50a with group instance id None; client reason: need to re-join with the given member-id: consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3-e96f48f0-7209-4ff3-90c9-b9f68619e50a) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:01,286] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-5bb8cf59-2b7c-4c48-a4bf-928cfd3aa390 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:01,288] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-5bb8cf59-2b7c-4c48-a4bf-928cfd3aa390 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-5bb8cf59-2b7c-4c48-a4bf-928cfd3aa390) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:01,362] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group b8801a63-a73c-402f-884b-eb1f60245931 in Empty state. Created a new member id consumer-b8801a63-a73c-402f-884b-eb1f60245931-2-b7b144a5-4bd6-44fd-9d18-4391bb31d3ad and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:01,371] INFO [GroupCoordinator 1]: Preparing to rebalance group b8801a63-a73c-402f-884b-eb1f60245931 in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member consumer-b8801a63-a73c-402f-884b-eb1f60245931-2-b7b144a5-4bd6-44fd-9d18-4391bb31d3ad with group instance id None; client reason: need to re-join with the given member-id: consumer-b8801a63-a73c-402f-884b-eb1f60245931-2-b7b144a5-4bd6-44fd-9d18-4391bb31d3ad) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:04,224] INFO [GroupCoordinator 1]: Stabilized group b32030d5-804a-4841-8170-dff4e89c9a0b generation 1 (__consumer_offsets-16) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:04,254] INFO [GroupCoordinator 1]: Assignment received from leader consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3-e96f48f0-7209-4ff3-90c9-b9f68619e50a for group b32030d5-804a-4841-8170-dff4e89c9a0b for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:04,290] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:04,298] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-5bb8cf59-2b7c-4c48-a4bf-928cfd3aa390 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:04,373] INFO [GroupCoordinator 1]: Stabilized group b8801a63-a73c-402f-884b-eb1f60245931 generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:04,389] INFO [GroupCoordinator 1]: Assignment received from leader consumer-b8801a63-a73c-402f-884b-eb1f60245931-2-b7b144a5-4bd6-44fd-9d18-4391bb31d3ad for group b8801a63-a73c-402f-884b-eb1f60245931 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:33:06,522] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-22 18:33:06,538] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(D3RsOYDlQmS98cTdHgnyHw),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2025-06-22 18:33:06,538] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController)
kafka | [2025-06-22 18:33:06,538] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-22 18:33:06,538] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-22 18:33:06,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-22 18:33:06,539] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-22 18:33:06,552] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-22 18:33:06,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-22 18:33:06,552] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2025-06-22 18:33:06,553] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
kafka | [2025-06-22 18:33:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-22 18:33:06,553] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-22 18:33:06,554] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-22 18:33:06,554] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 5 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-22 18:33:06,555] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-22 18:33:06,555] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager)
kafka | [2025-06-22 18:33:06,555] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 5 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
kafka | [2025-06-22 18:33:06,559] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-22 18:33:06,560] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-22 18:33:06,561] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:06,561] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-22 18:33:06,561] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(D3RsOYDlQmS98cTdHgnyHw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-22 18:33:06,566] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 5 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-22 18:33:06,567] INFO [Broker id=1] Finished LeaderAndIsr request in 13ms correlationId 5 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-22 18:33:06,568] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=D3RsOYDlQmS98cTdHgnyHw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 5 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-22 18:33:06,569] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger)
kafka | [2025-06-22 18:33:06,569] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger)
kafka | [2025-06-22 18:33:06,571] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 6 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
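[Editor's note] The policy-notification records above trace the full topic-creation path: AdminZkClient writes the assignment, the controller moves the partition from NonExistentPartition through NewPartition to OnlinePartition, and LeaderAndIsr/UpdateMetadata requests bring broker 1 up as leader. The client side that triggers all of this is a single create-topics call; a minimal sketch, again assuming confluent-kafka (the topic name and layout are from the log):

    # Sketch only: the client call behind the controller flow above.
    from confluent_kafka.admin import AdminClient, NewTopic

    admin = AdminClient({"bootstrap.servers": "kafka:9092"})
    # one partition, one replica -- HashMap(0 -> ArrayBuffer(1)) in the log
    futs = admin.create_topics([NewTopic("policy-notification",
                                         num_partitions=1,
                                         replication_factor=1)])
    for name, fut in futs.items():
        fut.result()  # raises on error; returns None once the topic exists
        print(f"created {name}")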
kafka | [2025-06-22 18:34:36,791] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-7d7f14e7-d46e-46b2-9f8d-87f54a95c3d4 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:34:36,793] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-7d7f14e7-d46e-46b2-9f8d-87f54a95c3d4 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:34:39,794] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:34:39,797] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-7d7f14e7-d46e-46b2-9f8d-87f54a95c3d4 for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:34:39,923] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-7d7f14e7-d46e-46b2-9f8d-87f54a95c3d4 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:34:39,925] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-22 18:34:39,927] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-7d7f14e7-d46e-46b2-9f8d-87f54a95c3d4, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator)
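[Editor's note] The testgrp entries show one complete consumer-group lifecycle as the coordinator sees it: an empty-id join that is handed a member id, a rebalance, a stabilized generation with one member, and an explicit LeaveGroup on close. The clientId rdkafka points at a librdkafka-based client; a minimal sketch of the equivalent client side, assuming confluent-kafka (group name and session timeout are from the log, the topic choice is illustrative):

    # Sketch only: the client-side lifecycle behind the testgrp entries above.
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "kafka:9092",
        "group.id": "testgrp",         # group name from the coordinator log
        "session.timeout.ms": 45000,   # sessionTimeoutMs=45000 in MemberMetadata
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["policy-notification"])  # illustrative topic
    try:
        # the first poll drives JoinGroup/SyncGroup -> "Stabilized group"
        msg = consumer.poll(5.0)
        if msg is not None and msg.error() is None:
            print(msg.value())
    finally:
        consumer.close()  # explicit LeaveGroup -> "has left group testgrp"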
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |
policy-api |  :: Spring Boot ::                (v3.4.6)
policy-api |
policy-api | [2025-06-22T18:32:37.247+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-22T18:32:37.317+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 37 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-22T18:32:37.318+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-22T18:32:38.865+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-22T18:32:39.034+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 158 ms. Found 6 JPA repository interfaces.
policy-api | [2025-06-22T18:32:39.722+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-api | [2025-06-22T18:32:39.735+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-22T18:32:39.739+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2025-06-22T18:32:39.739+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-api | [2025-06-22T18:32:39.778+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2025-06-22T18:32:39.778+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2399 ms
policy-api | [2025-06-22T18:32:40.116+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2025-06-22T18:32:40.202+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-api | [2025-06-22T18:32:40.252+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2025-06-22T18:32:40.660+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2025-06-22T18:32:40.703+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2025-06-22T18:32:40.922+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@59aa1d1c
policy-api | [2025-06-22T18:32:40.925+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2025-06-22T18:32:41.015+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-api | 	Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-api | 	Database driver: undefined/unknown
policy-api | 	Database version: 16.4
policy-api | 	Autocommit mode: undefined/unknown
policy-api | 	Isolation level: undefined/unknown
policy-api | 	Minimum pool size: undefined/unknown
policy-api | 	Maximum pool size: undefined/unknown
policy-api | [2025-06-22T18:32:43.078+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2025-06-22T18:32:43.082+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2025-06-22T18:32:43.814+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2025-06-22T18:32:44.804+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2025-06-22T18:32:45.946+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-22T18:32:45.992+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-22T18:32:46.607+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-22T18:32:46.756+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-22T18:32:46.777+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-22T18:32:46.802+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.324 seconds (process running for 10.854)
policy-api | [2025-06-22T18:33:39.924+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-22T18:33:39.924+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-22T18:33:39.926+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
policy-api | [2025-06-22T18:34:12.199+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers:
policy-api | []
policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
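[Editor's note] The ROBOT_VARIABLES block is just a list of --variable overrides handed to Robot Framework. A minimal sketch of the equivalent invocation through robot's Python API rather than the CLI (suite names and a subset of the values are copied from the log; the output directory matches the result paths printed below):

    # Sketch only: how ROBOT_VARIABLES reaches the suites.
    from robot import run

    rc = run(
        "xacml-pdp-test.robot",
        "xacml-pdp-slas.robot",
        variable=[
            "POLICY_API_IP:policy-api:6969",
            "POLICY_PDPX_IP:policy-xacml-pdp:6969",
            "KAFKA_IP:kafka:9092",
            "PROMETHEUS_IP:prometheus:9090",
            "TEST_ENV:docker",
        ],
        outputdir="/tmp/results",
    )
    print("RESULT:", rc)  # 0 when all tests pass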
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | MakeTopics :: Creates the Policy topics | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ExecuteXacmlPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
policy-csit | 4 tests, 4 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
policy-csit | 2 tests, 2 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
policy-csit | 6 tests, 6 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.4) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
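[Editor's note] The migrator's startup gate above is a plain retry loop on the postgres port; the refused nc attempts only mean the database container was still starting. A minimal sketch of the same wait in Python (host and port are from the log, timings are illustrative):

    # Sketch only: the same gate as the nc retry loop above.
    import socket
    import time

    def wait_for_port(host: str, port: int, retries: int = 30,
                      delay: float = 2.0) -> None:
        for _ in range(retries):
            try:
                with socket.create_connection((host, port), timeout=2):
                    print(f"Connection to {host} {port} port succeeded!")
                    return
            except OSError:
                time.sleep(delay)  # "Connection refused" -> retry
        raise TimeoutError(f"{host}:{port} never became reachable")

    wait_for_port("postgres", 5432)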
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | name | version
policy-db-migrator | -------------+---------
policy-db-migrator | policyadmin | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
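[Editor's note] The "List of databases" dump above is psql's \l output; the same check can be scripted. A minimal sketch with psycopg2 (the library choice is an assumption and the credentials are placeholders, not values from this build):

    # Sketch only: the query behind a listing like the one above.
    import psycopg2

    conn = psycopg2.connect(host="postgres", port=5432, dbname="policyadmin",
                            user="policy_user", password="CHANGE_ME")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT datname, pg_get_userbyid(datdba) AS owner"
                    " FROM pg_database WHERE NOT datistemplate"
                    " ORDER BY datname")
        for name, owner in cur.fetchall():
            print(f"{name:<20} {owner}")
    conn.close()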
policy-db-migrator | List of databases
policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
policy-db-migrator | | | | | | | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0470-pdp.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
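[Editor's note] Each "> upgrade NNNN-*.sql" block above follows the same pattern: apply one DDL script, then record an audit row (the INSERT 0 1) and a per-script return code. A rough sketch of that loop, not the migrator's actual implementation: psycopg2, the scripts directory, and the reduced column list are all assumptions, though the real changelog columns do appear in the table dump near the end of this log:

    # Rough sketch of the per-script pattern implied by the log above.
    import pathlib
    import psycopg2

    conn = psycopg2.connect(host="postgres", dbname="policyadmin",
                            user="policy_user", password="CHANGE_ME")  # placeholders
    for script in sorted(pathlib.Path("upgrade").glob("*.sql")):
        with conn, conn.cursor() as cur:
            cur.execute(script.read_text())  # the CREATE/ALTER/DROP step
            cur.execute("INSERT INTO policyadmin_schema_changelog"
                        " (script, operation, success)"
                        " VALUES (%s, 'upgrade', 1)",  # the INSERT 0 1 step
                        (script.name,))
        print(f"> upgrade {script.name}")
        print("rc=0")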
policy-db-migrator |
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdp.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-db-migrator | UPDATE 0
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0210-sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0220-sequence.sql
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-db-migrator | msg
policy-db-migrator | ---------------------------
policy-db-migrator | upgrade to 1100 completed
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-db-migrator | ALTER TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | DROP INDEX
policy-db-migrator | CREATE INDEX
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-audit_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | DROP TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | INSERT 0 1
policy-db-migrator | policyadmin: OK: upgrade (1300)
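[Editor's note] After the OK, the migrator re-prints the version and changelog tables shown below; confirming the upgrade landed is a two-query job. A minimal sketch, assuming the name/version pair below lives in the schema_versions relation the NOTICEs mention (credentials remain placeholders):

    # Sketch only: verify the migration outcome printed below.
    import psycopg2

    conn = psycopg2.connect(host="postgres", dbname="policyadmin",
                            user="policy_user", password="CHANGE_ME")
    with conn.cursor() as cur:
        cur.execute("SELECT name, version FROM schema_versions")
        print(cur.fetchall())  # expect [('policyadmin', 1300)]
        cur.execute("SELECT count(*), min(success)"
                    " FROM policyadmin_schema_changelog")
        applied, worst = cur.fetchone()
        print(f"{applied} scripts recorded, all succeeded: {worst == 1}")
    conn.close()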
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:24.526773 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:24.57221 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:24.624097 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:24.675313 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:24.728179 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:24.791973 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:24.84565 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:24.888548 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:24.936938 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 
2206251832240800u | 1 | 2025-06-22 18:32:24.987817 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.036164 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.094171 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.142789 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.189561 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.248681 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.29238 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.346666 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.402535 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.455748 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.507906 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.549995 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.596755 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.66079 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.704189 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.750474 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.812835 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.862807 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.916696 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:25.965256 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.014232 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.067876 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.121103 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.172235 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.232648 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.285988 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 
| 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.349219 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.394636 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.445204 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.496293 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.560592 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.615804 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.674507 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.728005 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.789459 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.845379 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.893944 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:26.97054 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.02617 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.090278 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.141581 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.201475 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.264142 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.320439 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.382581 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.432658 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.483811 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.547703 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.59739 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.660945 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.715016 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.765228 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.833389 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 
2025-06-22 18:32:27.889265 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:27.957387 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.012059 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.06133 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.123189 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.178746 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.257433 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.311598 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.362353 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.442338 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.492475 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.5558 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.612735 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.662826 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.73231 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.785873 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.848653 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.896349 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:28.949485 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.019843 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.07064 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.139811 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.191715 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.241471 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.310434 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 
18:32:29.356008 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.407534 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.464463 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.513895 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.563876 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.618621 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.667313 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.7136 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 2206251832240800u | 1 | 2025-06-22 18:32:29.763622 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:29.80915 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:29.861066 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:29.921102 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:29.968908 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:30.037085 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:30.088595 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:30.136268 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:30.201257 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:30.248972 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:30.321031 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:30.372832 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:30.4257 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 2206251832240900u | 1 | 2025-06-22 18:32:30.487609 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 2206251832241000u | 1 | 2025-06-22 18:32:30.539316 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 2206251832241000u | 1 | 2025-06-22 18:32:30.601108 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 2206251832241000u | 1 | 2025-06-22 18:32:30.653253 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 2206251832241000u | 1 | 2025-06-22 18:32:30.713744 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 2206251832241000u | 1 
| 2025-06-22 18:32:30.767789 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 2206251832241000u | 1 | 2025-06-22 18:32:30.817934 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 2206251832241000u | 1 | 2025-06-22 18:32:30.886236 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 2206251832241000u | 1 | 2025-06-22 18:32:30.947122 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 2206251832241000u | 1 | 2025-06-22 18:32:30.991114 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 2206251832241100u | 1 | 2025-06-22 18:32:31.053186 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 2206251832241200u | 1 | 2025-06-22 18:32:31.103843 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 2206251832241200u | 1 | 2025-06-22 18:32:31.183431 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 2206251832241200u | 1 | 2025-06-22 18:32:31.240773 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 2206251832241200u | 1 | 2025-06-22 18:32:31.294587 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 2206251832241300u | 1 | 2025-06-22 18:32:31.350071 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 2206251832241300u | 1 | 2025-06-22 18:32:31.396838 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 2206251832241300u | 1 | 2025-06-22 18:32:31.444646 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0200-automationcompositiondefinition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | 
policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 
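Every migration step in the clampacm run above follows the same contract: the script's DDL/DML output is echoed (ALTER TABLE, CREATE TABLE, UPDATE n), a final INSERT 0 1 records the script in that database's *_schema_changelog table, and rc=0 marks success. A minimal sketch of auditing the result afterwards with psql, assuming direct access as policy_user to the clampacm database (the hostname is hypothetical; the CSIT compose network reaches the database by its service name):

    # List every clampacm migration in execution order; columns match the
    # changelog table the migrator prints (id, script, from/to version, success).
    psql -h postgres -U policy_user -d clampacm -c \
      "SELECT id, script, from_version, to_version, success, attime
         FROM clampacm_schema_changelog ORDER BY id;"

A success value of 0 in any row would be the first thing to check when a later component fails to start.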
policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.11675 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.205316 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.26594 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.319043 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.387496 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.441244 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.489561 policy-db-migrator | 8 | 
0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.537715 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.589537 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.641789 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.687266 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.742082 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 2206251832321400u | 1 | 2025-06-22 18:32:32.822151 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 2206251832321500u | 1 | 2025-06-22 18:32:32.876381 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 2206251832321500u | 1 | 2025-06-22 18:32:32.939632 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 2206251832321500u | 1 | 2025-06-22 18:32:32.994556 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 2206251832321500u | 1 | 2025-06-22 18:32:33.037672 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 2206251832321500u | 1 | 2025-06-22 18:32:33.086917 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 2206251832321500u | 1 | 2025-06-22 18:32:33.139531 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 2206251832321500u | 1 | 2025-06-22 18:32:33.188126 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 2206251832321500u | 1 | 2025-06-22 18:32:33.244822 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 2206251832321600u | 1 | 2025-06-22 18:32:33.288753 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 2206251832321600u | 1 | 2025-06-22 18:32:33.334604 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 2206251832321601u | 1 | 2025-06-22 18:32:33.398188 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 2206251832321601u | 1 | 2025-06-22 18:32:33.445489 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 2206251832321700u | 1 | 2025-06-22 18:32:33.508286 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 2206251832321700u | 1 | 2025-06-22 18:32:33.566927 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 2206251832321700u | 1 | 2025-06-22 18:32:33.614372 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 2206251832321701u | 1 | 2025-06-22 18:32:33.678139 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 2206251832321701u | 1 | 2025-06-22 18:32:33.727275 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 2206251832321701u | 1 | 2025-06-22 18:32:33.796602 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 2206251832321701u | 1 | 2025-06-22 18:32:33.847436 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 2206251832321701u | 1 | 2025-06-22 18:32:33.89634 policy-db-migrator | 34 | 
0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 2206251832321701u | 1 | 2025-06-22 18:32:33.950074 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 2206251832321701u | 1 | 2025-06-22 18:32:33.998575 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 2206251832321701u | 1 | 2025-06-22 18:32:34.039027 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 2206251832321701u | 1 | 2025-06-22 18:32:34.087798 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | 
| | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 2206251832341600u | 1 | 2025-06-22 18:32:34.693791 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE 
policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 2206251832351600u | 1 | 2025-06-22 18:32:35.294308 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 2206251832351600u | 1 | 2025-06-22 18:32:35.377421 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600 policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.7:6969) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.5:9092) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . 
[Spring Boot ASCII-art banner] policy-pap | :: Spring Boot :: (v3.4.6) policy-pap | policy-pap | [2025-06-22T18:32:49.136+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 59 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-22T18:32:49.138+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" policy-pap | [2025-06-22T18:32:50.629+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-22T18:32:50.725+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 83 ms. Found 7 JPA repository interfaces. policy-pap | [2025-06-22T18:32:51.832+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-22T18:32:51.846+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-22T18:32:51.848+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-22T18:32:51.848+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-22T18:32:51.909+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-22T18:32:51.909+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2707 ms policy-pap | [2025-06-22T18:32:52.372+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-22T18:32:52.466+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-22T18:32:52.516+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-22T18:32:52.958+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-22T18:32:53.010+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-22T18:32:53.254+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6e337ba1 policy-pap | [2025-06-22T18:32:53.256+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
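"HikariPool-1 - Start completed" confirms pap's JDBC pool reached the same PostgreSQL instance the migrator populated. The migrator's schema_versions bookkeeping table gives a one-line health check for that database; a sketch, again with a hypothetical hostname:

    # Expect one row per tracked schema, e.g. policyadmin | 1300.
    psql -h postgres -U policy_user -d policyadmin -c \
      "SELECT name, version FROM schema_versions;"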
policy-pap | [2025-06-22T18:32:53.348+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-pap | 	Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-pap | 	Database driver: undefined/unknown
policy-pap | 	Database version: 16.4
policy-pap | 	Autocommit mode: undefined/unknown
policy-pap | 	Isolation level: undefined/unknown
policy-pap | 	Minimum pool size: undefined/unknown
policy-pap | 	Maximum pool size: undefined/unknown
policy-pap | [2025-06-22T18:32:55.400+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-pap | [2025-06-22T18:32:55.404+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-pap | [2025-06-22T18:32:56.758+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-1
policy-pap | 	client.rack = 
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
policy-pap | 	fetch.max.wait.ms = 500
policy-pap | 	fetch.min.bytes = 1
policy-pap | 	group.id = b32030d5-804a-4841-8170-dff4e89c9a0b
policy-pap | 	group.instance.id = null
policy-pap | 	group.protocol = classic
policy-pap | 	group.remote.assignor = null
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
policy-pap | 	internal.leave.group.on.close = true
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | 	isolation.level = read_uncommitted
policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 	max.partition.fetch.bytes = 1048576
policy-pap | 	max.poll.interval.ms = 300000
policy-pap | 	max.poll.records = 500
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | 	receive.buffer.bytes = 65536
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	session.timeout.ms = 45000
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 
policy-pap | [2025-06-22T18:32:56.829+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-22T18:32:56.995+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-22T18:32:56.995+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-22T18:32:56.995+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750617176994
policy-pap | [2025-06-22T18:32:56.998+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-1, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-22T18:32:56.999+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-policy-pap-2
policy-pap | 	client.rack = 
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
policy-pap | 	fetch.max.wait.ms = 500
policy-pap | 	fetch.min.bytes = 1
policy-pap | 	group.id = policy-pap
policy-pap | 	group.instance.id = null
policy-pap | 	group.protocol = classic
policy-pap | 	group.remote.assignor = null
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
policy-pap | 	internal.leave.group.on.close = true
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | 	isolation.level = read_uncommitted
policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 	max.partition.fetch.bytes = 1048576
policy-pap | 	max.poll.interval.ms = 300000
policy-pap | 	max.poll.records = 500
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | 	receive.buffer.bytes = 65536
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	session.timeout.ms = 45000
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 
policy-pap | [2025-06-22T18:32:56.999+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-22T18:32:57.008+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-22T18:32:57.008+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-22T18:32:57.008+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750617177008
policy-pap | [2025-06-22T18:32:57.008+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-22T18:32:57.451+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-pap | [2025-06-22T18:32:57.597+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-pap | [2025-06-22T18:32:57.693+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-pap | [2025-06-22T18:32:57.921+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath.
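[Editor's note] The ConsumerConfig dumps above are what the Kafka Java client prints when PAP builds its plain consumers for the policy-pdp-pap topic. A minimal sketch of an equivalent consumer, using only settings that appear in the dump (bootstrap server, group id, offset reset, String deserializers); everything else is left at client defaults, and the class name is illustrative, not PAP's own code:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // 15-second poll mirrors the fetchTimeout=15000 visible in the topic source logs below.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println(r.value());
                }
            }
        }
    }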
policy-pap | [2025-06-22T18:32:58.704+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-pap | [2025-06-22T18:32:58.813+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-pap | [2025-06-22T18:32:58.839+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1'
policy-pap | [2025-06-22T18:32:58.858+00:00|INFO|ServiceManager|main] Policy PAP starting
policy-pap | [2025-06-22T18:32:58.858+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-pap | [2025-06-22T18:32:58.859+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
policy-pap | [2025-06-22T18:32:58.860+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
policy-pap | [2025-06-22T18:32:58.860+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
policy-pap | [2025-06-22T18:32:58.860+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
policy-pap | [2025-06-22T18:32:58.860+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
policy-pap | [2025-06-22T18:32:58.862+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b32030d5-804a-4841-8170-dff4e89c9a0b, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@438cb294
policy-pap | [2025-06-22T18:32:58.873+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b32030d5-804a-4841-8170-dff4e89c9a0b, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-22T18:32:58.873+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3
policy-pap | 	client.rack = 
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
policy-pap | 	fetch.max.wait.ms = 500
policy-pap | 	fetch.min.bytes = 1
policy-pap | 	group.id = b32030d5-804a-4841-8170-dff4e89c9a0b
policy-pap | 	group.instance.id = null
policy-pap | 	group.protocol = classic
policy-pap | 	group.remote.assignor = null
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
policy-pap | 	internal.leave.group.on.close = true
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | 	isolation.level = read_uncommitted
policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 	max.partition.fetch.bytes = 1048576
policy-pap | 	max.poll.interval.ms = 300000
policy-pap | 	max.poll.records = 500
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | 	receive.buffer.bytes = 65536
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	session.timeout.ms = 45000
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 
policy-pap | [2025-06-22T18:32:58.874+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-22T18:32:58.881+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-22T18:32:58.881+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-22T18:32:58.881+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750617178880
policy-pap | [2025-06-22T18:32:58.881+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-22T18:32:58.882+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
policy-pap | [2025-06-22T18:32:58.882+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e8a8c9bd-db80-44f7-9274-d2d42435e676, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7b787996
policy-pap | [2025-06-22T18:32:58.882+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e8a8c9bd-db80-44f7-9274-d2d42435e676, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-22T18:32:58.882+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-policy-pap-4
policy-pap | 	client.rack = 
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
policy-pap | 	fetch.max.wait.ms = 500
policy-pap | 	fetch.min.bytes = 1
policy-pap | 	group.id = policy-pap
policy-pap | 	group.instance.id = null
policy-pap | 	group.protocol = classic
policy-pap | 	group.remote.assignor = null
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
policy-pap | 	internal.leave.group.on.close = true
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | 	isolation.level = read_uncommitted
policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 	max.partition.fetch.bytes = 1048576
policy-pap | 	max.poll.interval.ms = 300000
policy-pap | 	max.poll.records = 500
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | 	receive.buffer.bytes = 65536
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	session.timeout.ms = 45000
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 
policy-pap | [2025-06-22T18:32:58.883+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-22T18:32:58.888+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-22T18:32:58.888+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-22T18:32:58.888+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750617178888
policy-pap | [2025-06-22T18:32:58.888+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2025-06-22T18:32:58.888+00:00|INFO|ServiceManager|main] Policy PAP starting topics
policy-pap | [2025-06-22T18:32:58.889+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e8a8c9bd-db80-44f7-9274-d2d42435e676, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-22T18:32:58.889+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b32030d5-804a-4841-8170-dff4e89c9a0b, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2025-06-22T18:32:58.889+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=82d293b4-5534-4923-9bb7-8a7f43f51c3e, alive=false, publisher=null]]: starting
policy-pap | [2025-06-22T18:32:58.903+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | 	acks = -1
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	batch.size = 16384
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	buffer.memory = 33554432
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = producer-1
policy-pap | 	compression.gzip.level = -1
policy-pap | 	compression.lz4.level = 9
policy-pap | 	compression.type = none
policy-pap | 	compression.zstd.level = 3
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	delivery.timeout.ms = 120000
policy-pap | 	enable.idempotence = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	interceptor.classes = []
policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 	linger.ms = 0
policy-pap | 	max.block.ms = 60000
policy-pap | 	max.in.flight.requests.per.connection = 5
policy-pap | 	max.request.size = 1048576
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.max.idle.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partitioner.adaptive.partitioning.enable = true
policy-pap | 	partitioner.availability.timeout.ms = 0
policy-pap | 	partitioner.class = null
policy-pap | 	partitioner.ignore.keys = false
policy-pap | 	receive.buffer.bytes = 32768
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retries = 2147483647
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	transaction.timeout.ms = 60000
policy-pap | 	transactional.id = null
policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 
policy-pap | [2025-06-22T18:32:58.904+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-22T18:32:58.918+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-pap | [2025-06-22T18:32:58.937+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-22T18:32:58.937+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-22T18:32:58.937+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750617178937
policy-pap | [2025-06-22T18:32:58.938+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=82d293b4-5534-4923-9bb7-8a7f43f51c3e, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2025-06-22T18:32:58.938+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=edcd7d16-e971-462d-9b83-7fd0a81bbec9, alive=false, publisher=null]]: starting
policy-pap | [2025-06-22T18:32:58.939+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | 	acks = -1
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	batch.size = 16384
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	buffer.memory = 33554432
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = producer-2
policy-pap | 	compression.gzip.level = -1
policy-pap | 	compression.lz4.level = 9
policy-pap | 	compression.type = none
policy-pap | 	compression.zstd.level = 3
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	delivery.timeout.ms = 120000
policy-pap | 	enable.idempotence = true
policy-pap | 	enable.metrics.push = true
policy-pap | 	interceptor.classes = []
policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 	linger.ms = 0
policy-pap | 	max.block.ms = 60000
policy-pap | 	max.in.flight.requests.per.connection = 5
policy-pap | 	max.request.size = 1048576
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metadata.max.idle.ms = 300000
policy-pap | 	metadata.recovery.strategy = none
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partitioner.adaptive.partitioning.enable = true
policy-pap | 	partitioner.availability.timeout.ms = 0
policy-pap | 	partitioner.class = null
policy-pap | 	partitioner.ignore.keys = false
policy-pap | 	receive.buffer.bytes = 32768
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retries = 2147483647
policy-pap | 	retry.backoff.max.ms = 1000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.header.urlencode = false
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	transaction.timeout.ms = 60000
policy-pap | 	transactional.id = null
policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 
policy-pap | [2025-06-22T18:32:58.939+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector
policy-pap | [2025-06-22T18:32:58.940+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
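[Editor's note] Both ProducerConfig dumps show acks = -1 with enable.idempotence = true, which is why the client logs "Instantiated an idempotent producer." A minimal sketch of an equivalent publisher, using only values visible in the dumps (the class name and the sample payload are illustrative, not PAP's own code):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");              // acks = -1 in the dump
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // idempotent, as logged
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical payload; PAP publishes JSON messages such as PDP_UPDATE on this topic.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_TOPIC_CHECK\"}"));
                producer.flush();
            }
        }
    }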
policy-pap | [2025-06-22T18:32:58.947+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1
policy-pap | [2025-06-22T18:32:58.947+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851
policy-pap | [2025-06-22T18:32:58.947+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750617178947
policy-pap | [2025-06-22T18:32:58.947+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=edcd7d16-e971-462d-9b83-7fd0a81bbec9, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2025-06-22T18:32:58.948+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-pap | [2025-06-22T18:32:58.948+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-pap | [2025-06-22T18:32:58.950+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-pap | [2025-06-22T18:32:58.950+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-pap | [2025-06-22T18:32:58.952+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-pap | [2025-06-22T18:32:58.953+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-pap | [2025-06-22T18:32:58.953+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-pap | [2025-06-22T18:32:58.954+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-pap | [2025-06-22T18:32:58.954+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-pap | [2025-06-22T18:32:58.955+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-pap | [2025-06-22T18:32:58.958+00:00|INFO|ServiceManager|main] Policy PAP started
policy-pap | [2025-06-22T18:32:58.959+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.665 seconds (process running for 11.31)
policy-pap | [2025-06-22T18:32:59.449+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: Y94SwhAjTxOcMpy5L2vWew
policy-pap | [2025-06-22T18:32:59.451+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: Y94SwhAjTxOcMpy5L2vWew
policy-pap | [2025-06-22T18:32:59.454+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-22T18:32:59.455+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Cluster ID: Y94SwhAjTxOcMpy5L2vWew
policy-pap | [2025-06-22T18:32:59.503+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
policy-pap | [2025-06-22T18:32:59.504+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
policy-pap | [2025-06-22T18:32:59.511+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-22T18:32:59.511+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: Y94SwhAjTxOcMpy5L2vWew
policy-pap | [2025-06-22T18:32:59.647+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2025-06-22T18:32:59.649+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-22T18:33:01.165+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-22T18:33:01.172+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] (Re-)joining group
policy-pap | [2025-06-22T18:33:01.202+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Request joining group due to: need to re-join with the given member-id: consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3-e96f48f0-7209-4ff3-90c9-b9f68619e50a
policy-pap | [2025-06-22T18:33:01.202+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] (Re-)joining group
policy-pap | [2025-06-22T18:33:01.281+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | [2025-06-22T18:33:01.283+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2025-06-22T18:33:01.286+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-5bb8cf59-2b7c-4c48-a4bf-928cfd3aa390
policy-pap | [2025-06-22T18:33:01.286+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-pap | [2025-06-22T18:33:04.231+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Successfully joined group with generation Generation{generationId=1, memberId='consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3-e96f48f0-7209-4ff3-90c9-b9f68619e50a', protocol='range'}
policy-pap | [2025-06-22T18:33:04.241+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Finished assignment for group at generation 1: {consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3-e96f48f0-7209-4ff3-90c9-b9f68619e50a=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-22T18:33:04.270+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Successfully synced group in generation Generation{generationId=1, memberId='consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3-e96f48f0-7209-4ff3-90c9-b9f68619e50a', protocol='range'}
policy-pap | [2025-06-22T18:33:04.271+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-22T18:33:04.277+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-22T18:33:04.293+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-5bb8cf59-2b7c-4c48-a4bf-928cfd3aa390', protocol='range'}
policy-pap | [2025-06-22T18:33:04.293+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-5bb8cf59-2b7c-4c48-a4bf-928cfd3aa390=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2025-06-22T18:33:04.297+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-22T18:33:04.301+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-5bb8cf59-2b7c-4c48-a4bf-928cfd3aa390', protocol='range'}
policy-pap | [2025-06-22T18:33:04.302+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2025-06-22T18:33:04.302+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2025-06-22T18:33:04.304+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2025-06-22T18:33:04.314+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
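[Editor's note] The UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings above are transient: the consumers ask for metadata before the policy-pdp-pap topic exists, and they recover once the broker creates it and elects a leader. In a setup where that startup noise matters, the topic could be pre-created with the Kafka AdminClient; a minimal sketch, assuming the same single-broker kafka:9092 endpoint (the one-partition, replication-factor-1 shape matches what the assignment logs show, but pre-creating the topic is an editorial suggestion, not something this job does):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreatePdpPapTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // One partition, replication factor 1 -- a single-broker CSIT-style layout.
                admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1))).all().get();
            }
        }
    }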
policy-pap | [2025-06-22T18:33:04.314+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b32030d5-804a-4841-8170-dff4e89c9a0b-3, groupId=b32030d5-804a-4841-8170-dff4e89c9a0b] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | [2025-06-22T18:33:05.472+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
policy-pap | []
policy-pap | [2025-06-22T18:33:05.473+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"4ec87877-bd1c-4716-8bea-742c4412a581","timestampMs":1750617180859,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf"}
policy-pap | [2025-06-22T18:33:05.477+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"4ec87877-bd1c-4716-8bea-742c4412a581","timestampMs":1750617180859,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf"}
policy-pap | [2025-06-22T18:33:05.478+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK
policy-pap | [2025-06-22T18:33:05.478+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_TOPIC_CHECK
policy-pap | [2025-06-22T18:33:05.518+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"bff4a99a-a6ed-43b1-832c-5d5e76e24243","timestampMs":1750617185485,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-22T18:33:05.524+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"bff4a99a-a6ed-43b1-832c-5d5e76e24243","timestampMs":1750617185485,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup"}
policy-pap | [2025-06-22T18:33:05.527+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-22T18:33:06.273+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting
policy-pap | [2025-06-22T18:33:06.273+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting listener
policy-pap | [2025-06-22T18:33:06.273+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting timer
policy-pap | [2025-06-22T18:33:06.273+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=fd4b127c-b2f1-4302-9032-f1af07ff361e, expireMs=1750617216273]
policy-pap | [2025-06-22T18:33:06.275+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting enqueue
policy-pap | [2025-06-22T18:33:06.275+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=fd4b127c-b2f1-4302-9032-f1af07ff361e, expireMs=1750617216273]
policy-pap | [2025-06-22T18:33:06.275+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate started
policy-pap | [2025-06-22T18:33:06.282+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"fd4b127c-b2f1-4302-9032-f1af07ff361e","timestampMs":1750617186247,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
policy-pap | [2025-06-22T18:33:06.329+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"fd4b127c-b2f1-4302-9032-f1af07ff361e","timestampMs":1750617186247,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
policy-pap | [2025-06-22T18:33:06.330+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-22T18:33:06.336+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"fd4b127c-b2f1-4302-9032-f1af07ff361e","timestampMs":1750617186247,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
policy-pap | [2025-06-22T18:33:06.337+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-22T18:33:06.465+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"fd4b127c-b2f1-4302-9032-f1af07ff361e","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"cadc5171-7edc-496f-9e94-5b0f94c3718e","timestampMs":1750617186439,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
policy-pap | [2025-06-22T18:33:06.466+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"fd4b127c-b2f1-4302-9032-f1af07ff361e","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"cadc5171-7edc-496f-9e94-5b0f94c3718e","timestampMs":1750617186439,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
policy-pap | [2025-06-22T18:33:06.466+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping
policy-pap | [2025-06-22T18:33:06.466+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping enqueue
policy-pap | [2025-06-22T18:33:06.467+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap]
xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping timer policy-pap | [2025-06-22T18:33:06.467+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=fd4b127c-b2f1-4302-9032-f1af07ff361e, expireMs=1750617216273] policy-pap | [2025-06-22T18:33:06.467+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping listener policy-pap | [2025-06-22T18:33:06.467+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id fd4b127c-b2f1-4302-9032-f1af07ff361e policy-pap | [2025-06-22T18:33:06.467+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopped policy-pap | [2025-06-22T18:33:06.479+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"a10f86eb-d0b2-4e4b-b937-f4cb2f8608b5","timestampMs":1750617186451,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:06.504+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate successful policy-pap | [2025-06-22T18:33:06.504+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf start publishing next request policy-pap | [2025-06-22T18:33:06.504+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange starting policy-pap | [2025-06-22T18:33:06.504+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange starting listener policy-pap | [2025-06-22T18:33:06.504+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange starting timer policy-pap | [2025-06-22T18:33:06.504+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=0bc34fd7-53f7-412c-9f44-d9b5724f38f6, expireMs=1750617216504] policy-pap | [2025-06-22T18:33:06.504+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange starting enqueue policy-pap | [2025-06-22T18:33:06.504+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange started policy-pap | [2025-06-22T18:33:06.504+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=0bc34fd7-53f7-412c-9f44-d9b5724f38f6, expireMs=1750617216504] policy-pap | [2025-06-22T18:33:06.505+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0bc34fd7-53f7-412c-9f44-d9b5724f38f6","timestampMs":1750617186248,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:06.505+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.Naming","policy-type-version":"1.0.0","policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | 
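The PdpUpdate acknowledged above deployed SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, whose naming-models pair a pipe-delimited naming-recipe (for example AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP) with per-token naming-properties and a to_lower_case() name-operation. A hypothetical sketch of how such a recipe resolves; the region and timestamp values are made up, and the real resolution is performed by SDNC, not by this code:

def resolve_recipe(recipe: str, props: dict) -> str:
    # Substitute each recipe token with its property value, then apply
    # the name-operation declared in the model: to_lower_case().
    return "".join(props[token] for token in recipe.split("|")).lower()

props = {
    "AIC_CLOUD_REGION": "RegionOne",      # hypothetical value
    "CONSTANT": "onap-nf",                # property-value from the payload
    "TIMESTAMP": "20250622T183306Z",      # hypothetical value
    "DELIMITER": "-",                     # property-value from the payload
}
name = resolve_recipe("AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP", props)
print(name)  # regionone-onap-nf-20250622t183306z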
[2025-06-22T18:33:06.535+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Error while fetching metadata with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-22T18:33:06.935+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"a10f86eb-d0b2-4e4b-b937-f4cb2f8608b5","timestampMs":1750617186451,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:06.935+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-22T18:33:06.938+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0bc34fd7-53f7-412c-9f44-d9b5724f38f6","timestampMs":1750617186248,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:06.939+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-22T18:33:06.939+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"0bc34fd7-53f7-412c-9f44-d9b5724f38f6","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"284e7ddc-d7b5-43f1-9956-e81fa8159eda","timestampMs":1750617186527,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:07.216+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange stopping policy-pap | [2025-06-22T18:33:07.216+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange stopping enqueue policy-pap | [2025-06-22T18:33:07.216+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange stopping timer policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=0bc34fd7-53f7-412c-9f44-d9b5724f38f6, expireMs=1750617216504] policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange stopping listener policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange stopped policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpStateChange successful policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf start publishing next request policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate
starting listener policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting timer policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=32a75b59-731e-4b21-82e8-7343ad06fba0, expireMs=1750617217217] policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting enqueue policy-pap | [2025-06-22T18:33:07.218+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"32a75b59-731e-4b21-82e8-7343ad06fba0","timestampMs":1750617186925,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:07.217+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate started policy-pap | [2025-06-22T18:33:07.223+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0bc34fd7-53f7-412c-9f44-d9b5724f38f6","timestampMs":1750617186248,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:07.224+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-22T18:33:07.231+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"0bc34fd7-53f7-412c-9f44-d9b5724f38f6","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"284e7ddc-d7b5-43f1-9956-e81fa8159eda","timestampMs":1750617186527,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:07.232+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 0bc34fd7-53f7-412c-9f44-d9b5724f38f6 policy-pap | [2025-06-22T18:33:07.242+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"32a75b59-731e-4b21-82e8-7343ad06fba0","timestampMs":1750617186925,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:07.242+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-22T18:33:07.245+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"32a75b59-731e-4b21-82e8-7343ad06fba0","timestampMs":1750617186925,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:07.246+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type 
PDP_UPDATE policy-pap | [2025-06-22T18:33:07.251+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"32a75b59-731e-4b21-82e8-7343ad06fba0","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"624600b7-5634-4b28-8329-f7013fb0fd01","timestampMs":1750617187233,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:07.252+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping policy-pap | [2025-06-22T18:33:07.252+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping enqueue policy-pap | [2025-06-22T18:33:07.252+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping timer policy-pap | [2025-06-22T18:33:07.252+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=32a75b59-731e-4b21-82e8-7343ad06fba0, expireMs=1750617217217] policy-pap | [2025-06-22T18:33:07.252+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping listener policy-pap | [2025-06-22T18:33:07.252+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopped policy-pap | [2025-06-22T18:33:07.253+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"32a75b59-731e-4b21-82e8-7343ad06fba0","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"624600b7-5634-4b28-8329-f7013fb0fd01","timestampMs":1750617187233,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:33:07.254+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 32a75b59-731e-4b21-82e8-7343ad06fba0 policy-pap | [2025-06-22T18:33:07.258+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate successful policy-pap | [2025-06-22T18:33:07.258+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf has no more requests policy-pap | [2025-06-22T18:33:36.274+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=fd4b127c-b2f1-4302-9032-f1af07ff361e, expireMs=1750617216273] policy-pap | [2025-06-22T18:33:36.505+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=0bc34fd7-53f7-412c-9f44-d9b5724f38f6, expireMs=1750617216504] policy-pap | [2025-06-22T18:33:41.623+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-22T18:33:41.624+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-22T18:33:41.626+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms policy-pap | [2025-06-22T18:34:15.620+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group defaultGroup policy-pap | [2025-06-22T18:34:15.621+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] 
add policy onap.restart.tca 1.0.0 to subgroup defaultGroup xacml count=2 policy-pap | [2025-06-22T18:34:15.621+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-22T18:34:15.622+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf defaultGroup xacml policies=1 policy-pap | [2025-06-22T18:34:15.623+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup policy-pap | [2025-06-22T18:34:15.671+00:00|INFO|SessionData|http-nio-6969-exec-3] use cached group defaultGroup policy-pap | [2025-06-22T18:34:15.671+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy OSDF_CASABLANCA.Affinity_Default 1.0.0 to subgroup defaultGroup xacml count=3 policy-pap | [2025-06-22T18:34:15.671+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy OSDF_CASABLANCA.Affinity_Default 1.0.0 policy-pap | [2025-06-22T18:34:15.671+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf defaultGroup xacml policies=2 policy-pap | [2025-06-22T18:34:15.671+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup policy-pap | [2025-06-22T18:34:15.672+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group defaultGroup policy-pap | [2025-06-22T18:34:15.693+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-22T18:34:15Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=OSDF_CASABLANCA.Affinity_Default 1.0.0, action=DEPLOYMENT, timestamp=2025-06-22T18:34:15Z, user=policyadmin)] policy-pap | [2025-06-22T18:34:15.738+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting policy-pap | [2025-06-22T18:34:15.738+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting listener policy-pap | [2025-06-22T18:34:15.738+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting timer policy-pap | [2025-06-22T18:34:15.738+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer [name=106d6c6c-ee16-4b07-a6ff-30947cd454e4, expireMs=1750617285738] policy-pap | [2025-06-22T18:34:15.738+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=106d6c6c-ee16-4b07-a6ff-30947cd454e4, expireMs=1750617285738] policy-pap | [2025-06-22T18:34:15.738+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting enqueue policy-pap | [2025-06-22T18:34:15.738+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate started policy-pap | [2025-06-22T18:34:15.739+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"106d6c6c-ee16-4b07-a6ff-30947cd454e4","timestampMs":1750617255671,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:34:15.747+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"106d6c6c-ee16-4b07-a6ff-30947cd454e4","timestampMs":1750617255671,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:34:15.747+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-22T18:34:15.759+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"106d6c6c-ee16-4b07-a6ff-30947cd454e4","timestampMs":1750617255671,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:34:15.759+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-22T18:34:16.340+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"106d6c6c-ee16-4b07-a6ff-30947cd454e4","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"1862f0b6-e099-42b3-aa99-47fa0b0ba3dc","timestampMs":1750617256333,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:34:16.341+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 106d6c6c-ee16-4b07-a6ff-30947cd454e4 policy-pap | [2025-06-22T18:34:16.343+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"106d6c6c-ee16-4b07-a6ff-30947cd454e4","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"1862f0b6-e099-42b3-aa99-47fa0b0ba3dc","timestampMs":1750617256333,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:34:16.343+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping policy-pap | 
[2025-06-22T18:34:16.344+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping enqueue policy-pap | [2025-06-22T18:34:16.344+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping timer policy-pap | [2025-06-22T18:34:16.344+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=106d6c6c-ee16-4b07-a6ff-30947cd454e4, expireMs=1750617285738] policy-pap | [2025-06-22T18:34:16.344+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping listener policy-pap | [2025-06-22T18:34:16.344+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopped policy-pap | [2025-06-22T18:34:16.355+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate successful policy-pap | [2025-06-22T18:34:16.355+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf has no more requests policy-pap | [2025-06-22T18:34:16.356+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0},{"policy-type":"onap.policies.optimization.resource.AffinityPolicy","policy-type-version":"1.0.0","policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-22T18:34:40.475+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group defaultGroup policy-pap | [2025-06-22T18:34:40.477+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup defaultGroup xacml count=2 policy-pap | [2025-06-22T18:34:40.478+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-pap | [2025-06-22T18:34:40.478+00:00|INFO|SessionData|http-nio-6969-exec-4] add update xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf defaultGroup xacml policies=0 policy-pap | [2025-06-22T18:34:40.478+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group defaultGroup policy-pap | [2025-06-22T18:34:40.479+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group defaultGroup policy-pap | [2025-06-22T18:34:40.500+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-22T18:34:40Z, user=policyadmin)] policy-pap | [2025-06-22T18:34:40.508+00:00|INFO|ServiceManager|http-nio-6969-exec-4] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting policy-pap | [2025-06-22T18:34:40.508+00:00|INFO|ServiceManager|http-nio-6969-exec-4] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting listener policy-pap | [2025-06-22T18:34:40.508+00:00|INFO|ServiceManager|http-nio-6969-exec-4] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting timer policy-pap | [2025-06-22T18:34:40.508+00:00|INFO|TimerManager|http-nio-6969-exec-4] update timer registered Timer [name=8b3804f0-7cd2-4401-bfcd-8ad64b80a908, expireMs=1750617310508] policy-pap | [2025-06-22T18:34:40.508+00:00|INFO|ServiceManager|http-nio-6969-exec-4] 
xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate starting enqueue policy-pap | [2025-06-22T18:34:40.509+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"8b3804f0-7cd2-4401-bfcd-8ad64b80a908","timestampMs":1750617280478,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:34:40.510+00:00|INFO|ServiceManager|http-nio-6969-exec-4] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate started policy-pap | [2025-06-22T18:34:40.517+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"8b3804f0-7cd2-4401-bfcd-8ad64b80a908","timestampMs":1750617280478,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:34:40.517+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-22T18:34:40.522+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"8b3804f0-7cd2-4401-bfcd-8ad64b80a908","timestampMs":1750617280478,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:34:40.525+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-22T18:34:40.526+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"8b3804f0-7cd2-4401-bfcd-8ad64b80a908","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"09f01281-c47c-4f94-a9b2-2dbf68381bf3","timestampMs":1750617280521,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:34:40.526+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping policy-pap | [2025-06-22T18:34:40.526+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping enqueue policy-pap | [2025-06-22T18:34:40.526+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping timer policy-pap | [2025-06-22T18:34:40.527+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8b3804f0-7cd2-4401-bfcd-8ad64b80a908, expireMs=1750617310508] policy-pap | 
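Throughout these exchanges, every PDP_STATUS that answers a request echoes the originating requestId in response.responseTo; PAP's RequestIdDispatcher uses it to find the pending request, cancel its timer, and mark the PdpUpdate successful, while the copy arriving on a topic with nothing pending produces the "no listener for request id ..." lines. A small stand-alone sketch of that correlation step (hypothetical code, not the policy-pap source):

import json

pending = {"8b3804f0-7cd2-4401-bfcd-8ad64b80a908": "PdpUpdate undeploying onap.restart.tca"}

def on_pdp_status(raw: str) -> None:
    msg = json.loads(raw)
    response = msg.get("response") or {}
    request_id = response.get("responseTo")
    request = pending.pop(request_id, None)
    if request is None:
        print(f"no listener for request id {request_id}")
    elif response.get("responseStatus") == "SUCCESS":
        print(f"{request}: successful, cancelling timer {request_id}")

on_pdp_status('{"response": {"responseTo": "8b3804f0-7cd2-4401-bfcd-8ad64b80a908", '
              '"responseStatus": "SUCCESS"}}')
on_pdp_status('{"messageName": "PDP_STATUS"}')  # autonomous heartbeat: no responseTo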
[2025-06-22T18:34:40.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopping listener policy-pap | [2025-06-22T18:34:40.527+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate stopped policy-pap | [2025-06-22T18:34:40.529+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"8b3804f0-7cd2-4401-bfcd-8ad64b80a908","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"09f01281-c47c-4f94-a9b2-2dbf68381bf3","timestampMs":1750617280521,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:34:40.529+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8b3804f0-7cd2-4401-bfcd-8ad64b80a908 policy-pap | [2025-06-22T18:34:40.545+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf PdpUpdate successful policy-pap | [2025-06-22T18:34:40.545+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf has no more requests policy-pap | [2025-06-22T18:34:40.545+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-22T18:34:45.738+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=106d6c6c-ee16-4b07-a6ff-30947cd454e4, expireMs=1750617285738] policy-pap | [2025-06-22T18:34:58.955+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-22T18:35:06.494+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"acf694fd-0c8d-4f9b-a2af-9e78ff422ced","timestampMs":1750617306483,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:35:06.495+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"acf694fd-0c8d-4f9b-a2af-9e78ff422ced","timestampMs":1750617306483,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-pap | [2025-06-22T18:35:06.496+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-xacml-pdp | Waiting for pap port 6969... policy-xacml-pdp | pap (172.17.0.8:6969) open policy-xacml-pdp | Waiting for kafka port 9092... 
policy-xacml-pdp | kafka (172.17.0.5:9092) open policy-xacml-pdp | + KEYSTORE=/opt/app/policy/pdpx/etc/ssl/policy-keystore policy-xacml-pdp | + TRUSTSTORE=/opt/app/policy/pdpx/etc/ssl/policy-truststore policy-xacml-pdp | + KEYSTORE_PASSWD=Pol1cy_0nap policy-xacml-pdp | + TRUSTSTORE_PASSWD=Pol1cy_0nap policy-xacml-pdp | + '[' 0 -ge 1 ] policy-xacml-pdp | + CONFIG_FILE= policy-xacml-pdp | + '[' -z ] policy-xacml-pdp | + CONFIG_FILE=/opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-truststore ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-keystore ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/xacml.properties ] policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/logback.xml ] policy-xacml-pdp | Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | + echo 'Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json' policy-xacml-pdp | + /usr/lib/jvm/default-jvm/bin/java -cp '/opt/app/policy/pdpx/etc:/opt/app/policy/pdpx/lib/*' '-Dlogback.configurationFile=/opt/app/policy/pdpx/etc/logback.xml' '-Djavax.net.ssl.keyStore=/opt/app/policy/pdpx/etc/ssl/policy-keystore' '-Djavax.net.ssl.keyStorePassword=Pol1cy_0nap' '-Djavax.net.ssl.trustStore=/opt/app/policy/pdpx/etc/ssl/policy-truststore' '-Djavax.net.ssl.trustStorePassword=Pol1cy_0nap' org.onap.policy.pdpx.main.startstop.Main -c /opt/app/policy/pdpx/etc/defaultConfig.json policy-xacml-pdp | [2025-06-22T18:32:59.936+00:00|INFO|Main|main] Starting policy xacml pdp service with arguments - [-c, /opt/app/policy/pdpx/etc/defaultConfig.json] policy-xacml-pdp | [2025-06-22T18:33:00.069+00:00|INFO|XacmlPdpActivator|main] Activator initializing using org.onap.policy.pdpx.main.parameters.XacmlPdpParameterGroup@37858383 policy-xacml-pdp | [2025-06-22T18:33:00.121+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-xacml-pdp | allow.auto.create.topics = true policy-xacml-pdp | auto.commit.interval.ms = 5000 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | auto.offset.reset = latest policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | check.crcs = true policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = consumer-b8801a63-a73c-402f-884b-eb1f60245931-1 policy-xacml-pdp | client.rack = policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | default.api.timeout.ms = 60000 policy-xacml-pdp | enable.auto.commit = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | exclude.internal.topics = true policy-xacml-pdp | fetch.max.bytes = 52428800 policy-xacml-pdp | fetch.max.wait.ms = 500 policy-xacml-pdp | fetch.min.bytes = 1 policy-xacml-pdp | group.id = b8801a63-a73c-402f-884b-eb1f60245931 policy-xacml-pdp | group.instance.id = null policy-xacml-pdp | group.protocol = classic policy-xacml-pdp | group.remote.assignor = null policy-xacml-pdp | heartbeat.interval.ms = 3000 policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | internal.leave.group.on.close = true policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-xacml-pdp | isolation.level = read_uncommitted policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | max.partition.fetch.bytes = 1048576 policy-xacml-pdp | max.poll.interval.ms = 300000 policy-xacml-pdp | max.poll.records = 500 policy-xacml-pdp | metadata.max.age.ms = 300000 
policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-xacml-pdp | receive.buffer.bytes = 65536 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | session.timeout.ms = 45000 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | 
ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | policy-xacml-pdp | [2025-06-22T18:33:00.175+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp | [2025-06-22T18:33:00.350+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-22T18:33:00.351+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-22T18:33:00.351+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750617180348 policy-xacml-pdp | [2025-06-22T18:33:00.355+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-1, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Subscribed to topic(s): policy-pdp-pap policy-xacml-pdp | [2025-06-22T18:33:00.432+00:00|INFO|XacmlPdpApplicationManager|main] Initialization applications org.onap.policy.pdpx.main.parameters.XacmlApplicationParameters@7ec3394b JerseyClient(name=policyApiParameters, https=false, selfSignedCerts=false, hostname=policy-api, port=6969, basePath=null, userName=policyadmin, password=zb!XztG34, client=org.glassfish.jersey.client.JerseyClient@698122b2, baseUrl=http://policy-api:6969/, alive=true) policy-xacml-pdp | [2025-06-22T18:33:00.447+00:00|INFO|XacmlPdpApplicationManager|main] Application guard supports [onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0] policy-xacml-pdp | [2025-06-22T18:33:00.448+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath guard at this path /opt/app/policy/pdpx/apps/guard policy-xacml-pdp | [2025-06-22T18:33:00.448+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/guard policy-xacml-pdp | [2025-06-22T18:33:00.449+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/guard/xacml.properties policy-xacml-pdp | [2025-06-22T18:33:00.450+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, 
count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-22T18:33:00.450+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.persistenceunit -> OperationsHistoryPU policy-xacml-pdp | [2025-06-22T18:33:00.451+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.name -> GetOperationOutcome policy-xacml-pdp | [2025-06-22T18:33:00.451+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-22T18:33:00.451+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.451+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-22T18:33:00.451+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides policy-xacml-pdp | [2025-06-22T18:33:00.451+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.452+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip policy-xacml-pdp | [2025-06-22T18:33:00.452+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.description -> Returns operation outcome policy-xacml-pdp | [2025-06-22T18:33:00.452+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.description -> Returns operation counts based on time window policy-xacml-pdp | [2025-06-22T18:33:00.452+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.password -> policy_user policy-xacml-pdp | [2025-06-22T18:33:00.452+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-22T18:33:00.453+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.issuer -> urn:org:onap:xacml:guard:get-operation-outcome policy-xacml-pdp | [2025-06-22T18:33:00.453+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.persistenceunit -> OperationsHistoryPU policy-xacml-pdp | [2025-06-22T18:33:00.453+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.driver -> org.postgresql.Driver policy-xacml-pdp | [2025-06-22T18:33:00.453+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.name -> CountRecentOperations policy-xacml-pdp | [2025-06-22T18:33:00.454+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-22T18:33:00.454+00:00|INFO|XacmlPolicyUtils|main] 
xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.454+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.url -> jdbc:postgresql://postgres:5432/operationshistory policy-xacml-pdp | [2025-06-22T18:33:00.454+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.user -> policy_user policy-xacml-pdp | [2025-06-22T18:33:00.454+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.454+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.issuer -> urn:org:onap:xacml:guard:count-recent-operations policy-xacml-pdp | [2025-06-22T18:33:00.455+00:00|INFO|XacmlPolicyUtils|main] xacml.pip.engines -> count-recent-operations,get-operation-outcome policy-xacml-pdp | [2025-06-22T18:33:00.455+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.455+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip policy-xacml-pdp | [2025-06-22T18:33:00.455+00:00|INFO|StdXacmlApplicationServiceProvider|main] {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-22T18:33:00.457+00:00|WARN|XACMLProperties|main] Properties file /usr/lib/jvm/java-17-openjdk/lib/xacml.properties cannot be read. 
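For each application (guard above, optimization and naming below) XacmlPolicyUtils reads the directory's xacml.properties into a flat key/value map that wires the engine factories and, for guard, the two PIP engines and their OperationsHistoryPU datasource. A rough stdlib sketch of that load step; it deliberately ignores the ':' separators, escapes, and line continuations that full java.util.Properties parsing supports:

def load_properties(path: str) -> dict[str, str]:
    """Parse a simple key=value properties file, skipping blanks and comments."""
    props: dict[str, str] = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith(("#", "!")):
                continue
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

# e.g. guard = load_properties("/opt/app/policy/pdpx/apps/guard/xacml.properties")
# guard["xacml.pip.engines"] -> "count-recent-operations,get-operation-outcome"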
policy-xacml-pdp | [2025-06-22T18:33:00.506+00:00|INFO|XacmlPdpApplicationManager|main] Application optimization supports [onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0] policy-xacml-pdp | [2025-06-22T18:33:00.507+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath optimization at this path /opt/app/policy/pdpx/apps/optimization policy-xacml-pdp | [2025-06-22T18:33:00.507+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/optimization policy-xacml-pdp | [2025-06-22T18:33:00.507+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/optimization/xacml.properties policy-xacml-pdp | [2025-06-22T18:33:00.507+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-22T18:33:00.507+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-22T18:33:00.507+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> 
com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-22T18:33:00.508+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-22T18:33:00.510+00:00|INFO|XacmlPdpApplicationManager|main] Application naming supports [onap.policies.Naming 1.0.0] policy-xacml-pdp | [2025-06-22T18:33:00.510+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath naming at this path /opt/app/policy/pdpx/apps/naming policy-xacml-pdp | [2025-06-22T18:33:00.511+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/naming policy-xacml-pdp | [2025-06-22T18:33:00.511+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/naming/xacml.properties policy-xacml-pdp | [2025-06-22T18:33:00.511+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-22T18:33:00.511+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-22T18:33:00.511+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 
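The same factory defaults recur for every application directory; note that the root-combining algorithm here is combined-permit-overrides, whereas the guard application above uses deny-overrides. A hypothetical sketch of writing such a default file back out (key names copied from the log, class name invented):

    import java.io.Writer;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Properties;

    // Hypothetical: persist a subset of the default keys shown in the dumps above.
    public final class XacmlPropsWriterSketch {
        public static void write(Path propsFile) throws Exception {
            Properties props = new Properties();
            props.setProperty("xacml.rootPolicies", "");
            props.setProperty("xacml.referencedPolicies", "");
            props.setProperty("xacml.att.policyFinderFactory.combineRootPolicies",
                "urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides");
            props.setProperty("xacml.pdpEngineFactory",
                "com.att.research.xacmlatt.pdp.ATTPDPEngineFactory");
            try (Writer out = Files.newBufferedWriter(propsFile)) {
                props.store(out, "xacml application defaults");
            }
        }
    }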
policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-22T18:33:00.512+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-22T18:33:00.515+00:00|INFO|XacmlPdpApplicationManager|main] Application native supports [onap.policies.native.Xacml 1.0.0, onap.policies.native.ToscaXacml 1.0.0] policy-xacml-pdp | [2025-06-22T18:33:00.515+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath native at this path /opt/app/policy/pdpx/apps/native policy-xacml-pdp | [2025-06-22T18:33:00.515+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/native policy-xacml-pdp | [2025-06-22T18:33:00.515+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/native/xacml.properties policy-xacml-pdp | [2025-06-22T18:33:00.515+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, 
xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-22T18:33:00.516+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, 
xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-22T18:33:00.518+00:00|INFO|XacmlPdpApplicationManager|main] Application match supports [onap.policies.Match 1.0.0] policy-xacml-pdp | [2025-06-22T18:33:00.518+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath match at this path /opt/app/policy/pdpx/apps/match policy-xacml-pdp | [2025-06-22T18:33:00.518+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/match policy-xacml-pdp | [2025-06-22T18:33:00.518+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/match/xacml.properties policy-xacml-pdp | [2025-06-22T18:33:00.518+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-22T18:33:00.518+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-22T18:33:00.518+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-22T18:33:00.518+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-22T18:33:00.518+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.518+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-22T18:33:00.519+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-22T18:33:00.519+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-22T18:33:00.519+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.519+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.519+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> 
com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.519+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.519+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-22T18:33:00.519+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-22T18:33:00.520+00:00|INFO|XacmlPdpApplicationManager|main] Application monitoring supports [onap.Monitoring 1.0.0] policy-xacml-pdp | [2025-06-22T18:33:00.521+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath monitoring at this path /opt/app/policy/pdpx/apps/monitoring policy-xacml-pdp | [2025-06-22T18:33:00.521+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/monitoring policy-xacml-pdp | [2025-06-22T18:33:00.521+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-22T18:33:00.521+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-22T18:33:00.521+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> policy-xacml-pdp | [2025-06-22T18:33:00.521+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory policy-xacml-pdp | [2025-06-22T18:33:00.521+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> 
com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory policy-xacml-pdp | [2025-06-22T18:33:00.522+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.522+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory policy-xacml-pdp | [2025-06-22T18:33:00.522+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-22T18:33:00.522+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> policy-xacml-pdp | [2025-06-22T18:33:00.522+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.522+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.522+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory policy-xacml-pdp | [2025-06-22T18:33:00.522+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory policy-xacml-pdp | [2025-06-22T18:33:00.523+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory policy-xacml-pdp | [2025-06-22T18:33:00.523+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} policy-xacml-pdp | [2025-06-22T18:33:00.523+00:00|INFO|XacmlPdpApplicationManager|main] Finished applications initialization {optimize=org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplication@2b95e48b, native=org.onap.policy.xacml.pdp.application.nativ.NativePdpApplication@4a3329b9, guard=org.onap.policy.xacml.pdp.application.guard.GuardPdpApplication@3dddefd8, naming=org.onap.policy.xacml.pdp.application.naming.NamingPdpApplication@160ac7fb, match=org.onap.policy.xacml.pdp.application.match.MatchPdpApplication@12bfd80d, configure=org.onap.policy.xacml.pdp.application.monitoring.MonitoringPdpApplication@41925502} policy-xacml-pdp | [2025-06-22T18:33:00.549+00:00|INFO|XacmlPdpHearbeatPublisher|main] heartbeat topic probe 4000ms policy-xacml-pdp | [2025-06-22T18:33:00.779+00:00|INFO|ServiceManager|main] service manager starting policy-xacml-pdp | [2025-06-22T18:33:00.779+00:00|INFO|ServiceManager|main] service manager starting XACML PDP parameters policy-xacml-pdp | 
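The "Application ... supports [...]" records above amount to a registry of which policy types each PDP-X application handles. A hedged sketch of that mapping (type names copied from the log; the Map shape itself is an assumption, not the XacmlPdpApplicationManager internals):

    import java.util.List;
    import java.util.Map;

    // Assumed registry shape: application name -> supported policy types,
    // as listed in the initialization records above.
    public final class AppSupportSketch {
        static final Map<String, List<String>> SUPPORTS = Map.of(
            "naming",     List.of("onap.policies.Naming 1.0.0"),
            "native",     List.of("onap.policies.native.Xacml 1.0.0",
                                  "onap.policies.native.ToscaXacml 1.0.0"),
            "match",      List.of("onap.policies.Match 1.0.0"),
            "monitoring", List.of("onap.Monitoring 1.0.0"));

        public static void main(String[] args) {
            SUPPORTS.forEach((app, types) ->
                System.out.println("Application " + app + " supports " + types));
        }
    }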
[2025-06-22T18:33:00.779+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-xacml-pdp | [2025-06-22T18:33:00.779+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b8801a63-a73c-402f-884b-eb1f60245931, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f574cc2 policy-xacml-pdp | [2025-06-22T18:33:00.793+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b8801a63-a73c-402f-884b-eb1f60245931, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-xacml-pdp | [2025-06-22T18:33:00.794+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-xacml-pdp | allow.auto.create.topics = true policy-xacml-pdp | auto.commit.interval.ms = 5000 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | auto.offset.reset = latest policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | check.crcs = true policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = consumer-b8801a63-a73c-402f-884b-eb1f60245931-2 policy-xacml-pdp | client.rack = policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | default.api.timeout.ms = 60000 policy-xacml-pdp | enable.auto.commit = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | exclude.internal.topics = true policy-xacml-pdp | fetch.max.bytes = 52428800 policy-xacml-pdp | fetch.max.wait.ms = 500 policy-xacml-pdp | fetch.min.bytes = 1 policy-xacml-pdp | group.id = b8801a63-a73c-402f-884b-eb1f60245931 policy-xacml-pdp | group.instance.id = null policy-xacml-pdp | group.protocol = classic policy-xacml-pdp | group.remote.assignor = null policy-xacml-pdp | heartbeat.interval.ms = 3000 policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | internal.leave.group.on.close = true policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-xacml-pdp | isolation.level = read_uncommitted policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | max.partition.fetch.bytes = 1048576 policy-xacml-pdp | max.poll.interval.ms = 300000 policy-xacml-pdp | max.poll.records = 500 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-xacml-pdp | receive.buffer.bytes = 65536 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | session.timeout.ms = 45000 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-xacml-pdp | policy-xacml-pdp | 
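The ConsumerConfig dump above reduces to a handful of non-default settings: bootstrap kafka:9092, a UUID group id, latest offset reset, and String deserializers. A self-contained sketch (not the ONAP SingleThreadedKafkaTopicSource wrapper) wiring a plain KafkaConsumer with those values and subscribing to policy-pdp-pap:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Sketch of a consumer matching the configuration dump above.
    public final class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            p.put(ConsumerConfig.GROUP_ID_CONFIG, "b8801a63-a73c-402f-884b-eb1f60245931");
            p.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
            p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // fetchTimeout=15000 in the source records above
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println(r.value());  // raw PDP_* JSON messages
                }
            }
        }
    }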
[2025-06-22T18:33:00.794+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp | [2025-06-22T18:33:00.807+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-22T18:33:00.807+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-22T18:33:00.807+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750617180807 policy-xacml-pdp | [2025-06-22T18:33:00.808+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Subscribed to topic(s): policy-pdp-pap policy-xacml-pdp | [2025-06-22T18:33:00.809+00:00|INFO|ServiceManager|main] service manager starting topics policy-xacml-pdp | [2025-06-22T18:33:00.809+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b8801a63-a73c-402f-884b-eb1f60245931, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-xacml-pdp | [2025-06-22T18:33:00.809+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6026ad9f-a7e0-469b-b2ef-67bd055ea11d, alive=false, publisher=null]]: starting policy-xacml-pdp | [2025-06-22T18:33:00.821+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-xacml-pdp | acks = -1 policy-xacml-pdp | auto.include.jmx.reporter = true policy-xacml-pdp | batch.size = 16384 policy-xacml-pdp | bootstrap.servers = [kafka:9092] policy-xacml-pdp | buffer.memory = 33554432 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips policy-xacml-pdp | client.id = producer-1 policy-xacml-pdp | compression.gzip.level = -1 policy-xacml-pdp | compression.lz4.level = 9 policy-xacml-pdp | compression.type = none policy-xacml-pdp | compression.zstd.level = 3 policy-xacml-pdp | connections.max.idle.ms = 540000 policy-xacml-pdp | delivery.timeout.ms = 120000 policy-xacml-pdp | enable.idempotence = true policy-xacml-pdp | enable.metrics.push = true policy-xacml-pdp | interceptor.classes = [] policy-xacml-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-xacml-pdp | linger.ms = 0 policy-xacml-pdp | max.block.ms = 60000 policy-xacml-pdp | max.in.flight.requests.per.connection = 5 policy-xacml-pdp | max.request.size = 1048576 policy-xacml-pdp | metadata.max.age.ms = 300000 policy-xacml-pdp | metadata.max.idle.ms = 300000 policy-xacml-pdp | metadata.recovery.strategy = none policy-xacml-pdp | metric.reporters = [] policy-xacml-pdp | metrics.num.samples = 2 policy-xacml-pdp | metrics.recording.level = INFO policy-xacml-pdp | metrics.sample.window.ms = 30000 policy-xacml-pdp | partitioner.adaptive.partitioning.enable = true policy-xacml-pdp | partitioner.availability.timeout.ms = 0 policy-xacml-pdp | partitioner.class = null policy-xacml-pdp | partitioner.ignore.keys = false policy-xacml-pdp | receive.buffer.bytes = 32768 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 policy-xacml-pdp | reconnect.backoff.ms = 50 
policy-xacml-pdp | request.timeout.ms = 30000 policy-xacml-pdp | retries = 2147483647 policy-xacml-pdp | retry.backoff.max.ms = 1000 policy-xacml-pdp | retry.backoff.ms = 100 policy-xacml-pdp | sasl.client.callback.handler.class = null policy-xacml-pdp | sasl.jaas.config = null policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-xacml-pdp | sasl.kerberos.service.name = null policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-xacml-pdp | sasl.login.callback.handler.class = null policy-xacml-pdp | sasl.login.class = null policy-xacml-pdp | sasl.login.connect.timeout.ms = null policy-xacml-pdp | sasl.login.read.timeout.ms = null policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 policy-xacml-pdp | sasl.mechanism = GSSAPI policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null policy-xacml-pdp | security.protocol = PLAINTEXT policy-xacml-pdp | security.providers = null policy-xacml-pdp | send.buffer.bytes = 131072 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 policy-xacml-pdp | ssl.cipher.suites = null policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-xacml-pdp | ssl.endpoint.identification.algorithm = https policy-xacml-pdp | ssl.engine.factory.class = null policy-xacml-pdp | ssl.key.password = null policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 policy-xacml-pdp | ssl.keystore.certificate.chain = null policy-xacml-pdp | ssl.keystore.key = null policy-xacml-pdp | ssl.keystore.location = null policy-xacml-pdp | ssl.keystore.password = null policy-xacml-pdp | ssl.keystore.type = JKS policy-xacml-pdp | ssl.protocol = TLSv1.3 policy-xacml-pdp | ssl.provider = null policy-xacml-pdp | ssl.secure.random.implementation = null policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX policy-xacml-pdp | ssl.truststore.certificates = null policy-xacml-pdp | ssl.truststore.location = null policy-xacml-pdp | ssl.truststore.password = null policy-xacml-pdp | ssl.truststore.type = JKS policy-xacml-pdp | transaction.timeout.ms = 60000 policy-xacml-pdp | transactional.id = null policy-xacml-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-xacml-pdp | policy-xacml-pdp | [2025-06-22T18:33:00.822+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-xacml-pdp | [2025-06-22T18:33:00.833+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] 
Instantiated an idempotent producer. policy-xacml-pdp | [2025-06-22T18:33:00.856+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-xacml-pdp | [2025-06-22T18:33:00.856+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-xacml-pdp | [2025-06-22T18:33:00.856+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750617180856 policy-xacml-pdp | [2025-06-22T18:33:00.857+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6026ad9f-a7e0-469b-b2ef-67bd055ea11d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-xacml-pdp | [2025-06-22T18:33:00.857+00:00|INFO|ServiceManager|main] service manager starting Terminate PDP policy-xacml-pdp | [2025-06-22T18:33:00.857+00:00|INFO|ServiceManager|main] service manager starting Heartbeat Publisher policy-xacml-pdp | [2025-06-22T18:33:00.858+00:00|INFO|ServiceManager|main] service manager starting REST Server policy-xacml-pdp | [2025-06-22T18:33:00.858+00:00|INFO|ServiceManager|main] service manager starting policy-xacml-pdp | [2025-06-22T18:33:00.858+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-xacml-pdp | [2025-06-22T18:33:00.864+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b8801a63-a73c-402f-884b-eb1f60245931, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: registering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007f1cb72a7dd0@74e70cf1 policy-xacml-pdp | [2025-06-22T18:33:00.865+00:00|INFO|SingleThreadedBusTopicSource|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b8801a63-a73c-402f-884b-eb1f60245931, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=2]]]]: register: start not attempted policy-xacml-pdp | [2025-06-22T18:33:00.868+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers: policy-xacml-pdp | [] policy-xacml-pdp | [2025-06-22T18:33:00.870+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"4ec87877-bd1c-4716-8bea-742c4412a581","timestampMs":1750617180859,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf"} policy-xacml-pdp | [2025-06-22T18:33:00.858+00:00|INFO|JettyServletServer|main] JettyJerseyServer 
[JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: STARTING policy-xacml-pdp | [2025-06-22T18:33:00.871+00:00|INFO|ServiceManager|main] service manager started policy-xacml-pdp | [2025-06-22T18:33:00.871+00:00|INFO|ServiceManager|main] service manager started policy-xacml-pdp | [2025-06-22T18:33:00.871+00:00|INFO|Main|main] Started policy-xacml-pdp service successfully. policy-xacml-pdp | [2025-06-22T18:33:00.871+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:,STOPPED}})]: RUN policy-xacml-pdp | [2025-06-22T18:33:01.319+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Cluster ID: Y94SwhAjTxOcMpy5L2vWew policy-xacml-pdp | [2025-06-22T18:33:01.319+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: Y94SwhAjTxOcMpy5L2vWew policy-xacml-pdp | 
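The producer side mirrors the consumer: acks=-1 together with enable.idempotence=true (hence "Instantiated an idempotent producer" above), String serializers, bootstrap kafka:9092. A minimal standalone sketch publishing one probe message; the JSON payload is illustrative only:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Sketch of a producer matching the ProducerConfig dump above.
    public final class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            p.put(ProducerConfig.ACKS_CONFIG, "all");  // logged as acks = -1
            p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
            p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                String msg = "{\"messageName\":\"PDP_TOPIC_CHECK\",\"requestId\":\"example\"}";
                producer.send(new ProducerRecord<>("policy-pdp-pap", msg));
                producer.flush();
            }
        }
    }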
[2025-06-22T18:33:01.320+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-xacml-pdp | [2025-06-22T18:33:01.330+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-xacml-pdp | [2025-06-22T18:33:01.331+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] (Re-)joining group policy-xacml-pdp | [2025-06-22T18:33:01.368+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Request joining group due to: need to re-join with the given member-id: consumer-b8801a63-a73c-402f-884b-eb1f60245931-2-b7b144a5-4bd6-44fd-9d18-4391bb31d3ad policy-xacml-pdp | [2025-06-22T18:33:01.369+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] (Re-)joining group policy-xacml-pdp | [2025-06-22T18:33:01.536+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-xacml-pdp | [2025-06-22T18:33:01.537+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-xacml-pdp | [2025-06-22T18:33:04.375+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Successfully joined group with generation Generation{generationId=1, memberId='consumer-b8801a63-a73c-402f-884b-eb1f60245931-2-b7b144a5-4bd6-44fd-9d18-4391bb31d3ad', protocol='range'} policy-xacml-pdp | [2025-06-22T18:33:04.385+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Finished assignment for group at generation 1: {consumer-b8801a63-a73c-402f-884b-eb1f60245931-2-b7b144a5-4bd6-44fd-9d18-4391bb31d3ad=Assignment(partitions=[policy-pdp-pap-0])} policy-xacml-pdp | [2025-06-22T18:33:04.395+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Successfully synced group in generation Generation{generationId=1, memberId='consumer-b8801a63-a73c-402f-884b-eb1f60245931-2-b7b144a5-4bd6-44fd-9d18-4391bb31d3ad', protocol='range'} policy-xacml-pdp | [2025-06-22T18:33:04.395+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-xacml-pdp | [2025-06-22T18:33:04.397+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Adding newly assigned partitions: policy-pdp-pap-0 policy-xacml-pdp | [2025-06-22T18:33:04.405+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] 
Found no committed offset for partition policy-pdp-pap-0 policy-xacml-pdp | [2025-06-22T18:33:04.420+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b8801a63-a73c-402f-884b-eb1f60245931-2, groupId=b8801a63-a73c-402f-884b-eb1f60245931] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-xacml-pdp | [2025-06-22T18:33:05.418+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"4ec87877-bd1c-4716-8bea-742c4412a581","timestampMs":1750617180859,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf"} policy-xacml-pdp | [2025-06-22T18:33:05.469+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"4ec87877-bd1c-4716-8bea-742c4412a581","timestampMs":1750617180859,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf"} policy-xacml-pdp | [2025-06-22T18:33:05.472+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK policy-xacml-pdp | [2025-06-22T18:33:05.473+00:00|INFO|BidirectionalTopicClient|KAFKA-source-policy-pdp-pap] topic policy-pdp-pap is ready; found matching message PdpTopicCheck(super=PdpMessage(messageName=PDP_TOPIC_CHECK, requestId=4ec87877-bd1c-4716-8bea-742c4412a581, timestampMs=1750617180859, name=xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf, pdpGroup=null, pdpSubgroup=null)) policy-xacml-pdp | [2025-06-22T18:33:05.483+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b8801a63-a73c-402f-884b-eb1f60245931, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=1, locked=false, #topicListeners=2]]]]: unregistering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007f1cb72a7dd0@74e70cf1 policy-xacml-pdp | [2025-06-22T18:33:05.487+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=bff4a99a-a6ed-43b1-832c-5d5e76e24243, timestampMs=1750617185485, name=xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf, pdpGroup=defaultGroup, pdpSubgroup=null), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-22T18:33:05.497+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"bff4a99a-a6ed-43b1-832c-5d5e76e24243","timestampMs":1750617185485,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup"} policy-xacml-pdp | [2025-06-22T18:33:05.519+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"bff4a99a-a6ed-43b1-832c-5d5e76e24243","timestampMs":1750617185485,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup"} policy-xacml-pdp | [2025-06-22T18:33:05.519+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-22T18:33:06.330+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"fd4b127c-b2f1-4302-9032-f1af07ff361e","timestampMs":1750617186247,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:06.341+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=fd4b127c-b2f1-4302-9032-f1af07ff361e, timestampMs=1750617186247, name=xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-96c746df-a679-451e-9b08-409400ed4a9d, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.Naming, typeVersion=1.0.0, properties={policy-instance-name=ONAP_NF_NAMING_TIMESTAMP, naming-models=[{naming-type=VNF, naming-recipe=AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP, name-operation=to_lower_case(), naming-properties=[{property-name=AIC_CLOUD_REGION}, {property-name=CONSTANT, property-value=onap-nf}, {property-name=TIMESTAMP}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VNFC, naming-recipe=VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-name=SEQUENCE, 
increment-sequence={max=zzz, scope=ENTIRETY, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}, {property-name=NFC_NAMING_CODE}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VF-MODULE, naming-recipe=VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-value=-, property-name=DELIMITER}, {property-name=VF_MODULE_LABEL}, {property-name=VF_MODULE_TYPE}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=PRECEEDING, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}]}]}))], policiesToBeUndeployed=[]) policy-xacml-pdp | [2025-06-22T18:33:06.350+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP type: onap.policies.Naming weight: null policy: policy-xacml-pdp | {"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}} policy-xacml-pdp | [2025-06-22T18:33:06.411+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is policy-xacml-pdp | [generated XACML PolicyType XML elided: the markup was stripped when this log was captured, leaving only bare text nodes. The recoverable fields are: PolicyId SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy type onap.policies.Naming version 1.0.0, description "Default is to PERMIT if the policy matches.", and an obligation embedding the same ToscaPolicy JSON shown in the Obligation Policy record above.] policy-xacml-pdp | [2025-06-22T18:33:06.417+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/naming/xacml.properties policy-xacml-pdp | [2025-06-22T18:33:06.438+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy-version=1.0.0} into application naming policy-xacml-pdp | [2025-06-22T18:33:06.440+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp |
{"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"fd4b127c-b2f1-4302-9032-f1af07ff361e","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"cadc5171-7edc-496f-9e94-5b0f94c3718e","timestampMs":1750617186439,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:06.451+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=a10f86eb-d0b2-4e4b-b937-f4cb2f8608b5, timestampMs=1750617186451, name=xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) policy-xacml-pdp | [2025-06-22T18:33:06.451+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"a10f86eb-d0b2-4e4b-b937-f4cb2f8608b5","timestampMs":1750617186451,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:06.483+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"fd4b127c-b2f1-4302-9032-f1af07ff361e","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"cadc5171-7edc-496f-9e94-5b0f94c3718e","timestampMs":1750617186439,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:06.483+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-22T18:33:06.489+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"a10f86eb-d0b2-4e4b-b937-f4cb2f8608b5","timestampMs":1750617186451,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:06.490+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-22T18:33:06.522+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0bc34fd7-53f7-412c-9f44-d9b5724f38f6","timestampMs":1750617186248,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:06.525+00:00|INFO|XacmlPdpStateChangeListener|KAFKA-source-policy-pdp-pap] PDP State Change message has been received from the PAP - PdpStateChange(super=PdpMessage(messageName=PDP_STATE_CHANGE, requestId=0bc34fd7-53f7-412c-9f44-d9b5724f38f6, timestampMs=1750617186248, name=xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf, 
pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-96c746df-a679-451e-9b08-409400ed4a9d, state=ACTIVE) policy-xacml-pdp | [2025-06-22T18:33:06.527+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] set state of org.onap.policy.pdpx.main.XacmlState@1118d6d8 to ACTIVE policy-xacml-pdp | [2025-06-22T18:33:06.527+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] State change: ACTIVE - Starting rest controller policy-xacml-pdp | [2025-06-22T18:33:06.528+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"0bc34fd7-53f7-412c-9f44-d9b5724f38f6","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"284e7ddc-d7b5-43f1-9956-e81fa8159eda","timestampMs":1750617186527,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:06.543+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"0bc34fd7-53f7-412c-9f44-d9b5724f38f6","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"284e7ddc-d7b5-43f1-9956-e81fa8159eda","timestampMs":1750617186527,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:06.544+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-22T18:33:07.231+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"32a75b59-731e-4b21-82e8-7343ad06fba0","timestampMs":1750617186925,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:07.232+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=32a75b59-731e-4b21-82e8-7343ad06fba0, timestampMs=1750617186925, name=xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-96c746df-a679-451e-9b08-409400ed4a9d, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[], policiesToBeUndeployed=[]) policy-xacml-pdp | [2025-06-22T18:33:07.233+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"32a75b59-731e-4b21-82e8-7343ad06fba0","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"624600b7-5634-4b28-8329-f7013fb0fd01","timestampMs":1750617187233,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:07.246+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"32a75b59-731e-4b21-82e8-7343ad06fba0","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"624600b7-5634-4b28-8329-f7013fb0fd01","timestampMs":1750617187233,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:33:07.246+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-22T18:33:26.065+00:00|INFO|RequestLog|qtp2014233765-33] 172.17.0.1 - - [22/Jun/2025:18:33:26 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0" policy-xacml-pdp | [2025-06-22T18:33:35.600+00:00|INFO|RequestLog|qtp2014233765-27] 172.17.0.2 - policyadmin [22/Jun/2025:18:33:35 +0000] "GET /metrics HTTP/1.1" 200 2128 "" "Prometheus/3.4.1" policy-xacml-pdp | [2025-06-22T18:34:11.970+00:00|INFO|RequestLog|qtp2014233765-33] 172.17.0.6 - policyadmin [22/Jun/2025:18:34:11 +0000] "GET /policy/pdpx/v1/healthcheck?null HTTP/1.1" 200 110 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-22T18:34:11.989+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.6 - policyadmin [22/Jun/2025:18:34:11 +0000] "GET /metrics?null HTTP/1.1" 200 2056 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-22T18:34:13.538+00:00|INFO|GuardTranslator|qtp2014233765-26] Converting Request DecisionRequest(onapName=Guard, onapComponent=Guard-component, onapInstance=Guard-component-instance, requestId=unique-request-guard-1, context=null, action=guard, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={guard={actor=APPC, operation=ModifyConfig, target=f17face5-69cb-4c88-9e0b-7426db7edddd, requestId=c7c6a4aa-bb61-4a15-b831-ba1472dd4a65, clname=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}}) policy-xacml-pdp | [2025-06-22T18:34:13.559+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-dateTime policy-xacml-pdp | [2025-06-22T18:34:13.559+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-date policy-xacml-pdp | [2025-06-22T18:34:13.559+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-time policy-xacml-pdp | [2025-06-22T18:34:13.559+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:org:onap:guard:timezone policy-xacml-pdp | [2025-06-22T18:34:13.560+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:org:onap:guard:target:vf-count policy-xacml-pdp | [2025-06-22T18:34:13.560+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-name policy-xacml-pdp | [2025-06-22T18:34:13.560+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-id policy-xacml-pdp | [2025-06-22T18:34:13.560+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-type policy-xacml-pdp | [2025-06-22T18:34:13.560+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.nf-naming-code 
policy-xacml-pdp | [2025-06-22T18:34:13.560+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:org:onap:guard:target:vserver.vserver-id policy-xacml-pdp | [2025-06-22T18:34:13.560+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:org:onap:guard:target:cloud-region.cloud-region-id policy-xacml-pdp | [2025-06-22T18:34:13.565+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Constructed using properties {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-22T18:34:13.565+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-22T18:34:13.565+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Combining root policies with urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides policy-xacml-pdp | [2025-06-22T18:34:13.571+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Root Policies: 1 policy-xacml-pdp | [2025-06-22T18:34:13.571+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Referenced Policies: 0 policy-xacml-pdp | [2025-06-22T18:34:13.572+00:00|INFO|StdPolicyFinder|qtp2014233765-26] Updating policy map with policy 8a1996a4-46cf-48e2-b7b5-29f37a67a3e1 version 1.0 policy-xacml-pdp | [2025-06-22T18:34:13.575+00:00|INFO|StdOnapPip|qtp2014233765-26] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, 
xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-22T18:34:13.660+00:00|INFO|LogHelper|qtp2014233765-26] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] policy-xacml-pdp | [2025-06-22T18:34:13.693+00:00|INFO|Version|qtp2014233765-26] HHH000412: Hibernate ORM core version 6.6.16.Final policy-xacml-pdp | [2025-06-22T18:34:13.716+00:00|INFO|RegionFactoryInitiator|qtp2014233765-26] HHH000026: Second-level cache disabled policy-xacml-pdp | [2025-06-22T18:34:13.853+00:00|WARN|pooling|qtp2014233765-26] HHH10001002: Using built-in connection pool (not intended for production use) policy-xacml-pdp | [2025-06-22T18:34:14.065+00:00|INFO|pooling|qtp2014233765-26] HHH10001005: Database info: policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] policy-xacml-pdp | Database driver: org.postgresql.Driver policy-xacml-pdp | Database version: 16.4 policy-xacml-pdp | Autocommit mode: false policy-xacml-pdp | Isolation level: undefined/unknown policy-xacml-pdp | Minimum pool size: 1 policy-xacml-pdp | Maximum pool size: 20 policy-xacml-pdp | [2025-06-22T18:34:15.007+00:00|INFO|JtaPlatformInitiator|qtp2014233765-26] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-xacml-pdp | [2025-06-22T18:34:15.044+00:00|INFO|StdOnapPip|qtp2014233765-26] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, 
xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} policy-xacml-pdp | [2025-06-22T18:34:15.048+00:00|INFO|LogHelper|qtp2014233765-26] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] policy-xacml-pdp | [2025-06-22T18:34:15.050+00:00|INFO|RegionFactoryInitiator|qtp2014233765-26] HHH000026: Second-level cache disabled policy-xacml-pdp | [2025-06-22T18:34:15.068+00:00|WARN|pooling|qtp2014233765-26] HHH10001002: Using built-in connection pool (not intended for production use) policy-xacml-pdp | [2025-06-22T18:34:15.082+00:00|INFO|pooling|qtp2014233765-26] HHH10001005: Database info: policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] policy-xacml-pdp | Database driver: org.postgresql.Driver policy-xacml-pdp | Database version: 16.4 policy-xacml-pdp | Autocommit mode: false policy-xacml-pdp | Isolation level: undefined/unknown policy-xacml-pdp | Minimum pool size: 1 policy-xacml-pdp | Maximum pool size: 20 policy-xacml-pdp | [2025-06-22T18:34:15.114+00:00|INFO|JtaPlatformInitiator|qtp2014233765-26] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-xacml-pdp | [2025-06-22T18:34:15.118+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-26] Elapsed Time: 1557ms policy-xacml-pdp | [2025-06-22T18:34:15.118+00:00|INFO|GuardTranslator|qtp2014233765-26] Converting Response 
{results=[{decision=NotApplicable,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component-instance}],includeInResults=true}{attributeId=urn:org:onap:guard:request:request-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=unique-request-guard-1}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:guard:clname:clname-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}],includeInResults=true}{attributeId=urn:org:onap:guard:actor:actor-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=APPC}],includeInResults=true}{attributeId=urn:org:onap:guard:operation:operation-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ModifyConfig}],includeInResults=true}{attributeId=urn:org:onap:guard:target:target-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=f17face5-69cb-4c88-9e0b-7426db7edddd}],includeInResults=true}]}]}]} policy-xacml-pdp | [2025-06-22T18:34:15.123+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.6 - policyadmin [22/Jun/2025:18:34:13 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 19 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-22T18:34:15.749+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"106d6c6c-ee16-4b07-a6ff-30947cd454e4","timestampMs":1750617255671,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:34:15.750+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=106d6c6c-ee16-4b07-a6ff-30947cd454e4, timestampMs=1750617255671, name=xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-96c746df-a679-451e-9b08-409400ed4a9d, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.monitoring.tcagen2, typeVersion=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}})), ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.optimization.resource.AffinityPolicy, typeVersion=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}))], policiesToBeUndeployed=[]) policy-xacml-pdp | 
[2025-06-22T18:34:15.751+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: onap.restart.tca type: onap.policies.monitoring.tcagen2 weight: null policy: policy-xacml-pdp | {"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}} policy-xacml-pdp | [2025-06-22T18:34:15.776+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is policy-xacml-pdp | [XACML PolicySet XML not preserved by the console capture (markup stripped); the surviving fragments identify policy onap.restart.tca, policy type onap.policies.monitoring.tcagen2, version 1.0.0, rule description "Default is to PERMIT if the policy matches.", and an embedded copy of the TOSCA policy JSON shown above] policy-xacml-pdp | [2025-06-22T18:34:15.777+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-22T18:34:15.778+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} into application monitoring policy-xacml-pdp | [2025-06-22T18:34:15.778+00:00|INFO|OptimizationPdpApplication|KAFKA-source-policy-pdp-pap] optimization can support onap.policies.optimization.resource.AffinityPolicy 1.0.0 policy-xacml-pdp | [2025-06-22T18:34:15.778+00:00|ERROR|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] PolicyType not found in data area yet /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml policy-xacml-pdp | java.nio.file.NoSuchFileException: /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92) policy-xacml-pdp | at
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) policy-xacml-pdp | at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218) policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:380) policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:432) policy-xacml-pdp | at java.base/java.nio.file.Files.readAllBytes(Files.java:3288) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.loadPolicyType(StdMatchableTranslator.java:515) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.findPolicyType(StdMatchableTranslator.java:480) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.convertPolicy(StdMatchableTranslator.java:241) policy-xacml-pdp | at org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplicationTranslator.convertPolicy(OptimizationPdpApplicationTranslator.java:72) policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider.loadPolicy(StdXacmlApplicationServiceProvider.java:127) policy-xacml-pdp | at org.onap.policy.pdpx.main.rest.XacmlPdpApplicationManager.loadDeployedPolicy(XacmlPdpApplicationManager.java:199) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.XacmlPdpUpdatePublisher.handlePdpUpdate(XacmlPdpUpdatePublisher.java:91) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:72) policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:36) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.ScoListener.onTopicEvent(ScoListener.java:75) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher.onTopicEvent(MessageTypeDispatcher.java:97) policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.JsonListener.onTopicEvent(JsonListener.java:61) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.TopicBase.broadcast(TopicBase.java:170) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.fetchAllMessages(SingleThreadedBusTopicSource.java:252) policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.run(SingleThreadedBusTopicSource.java:235) policy-xacml-pdp | at java.base/java.lang.Thread.run(Thread.java:840) policy-xacml-pdp | [2025-06-22T18:34:15.810+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls policy-xacml-pdp | [2025-06-22T18:34:15.813+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls policy-xacml-pdp | [2025-06-22T18:34:16.259+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] Successfully pulled onap.policies.optimization.resource.AffinityPolicy 1.0.0 policy-xacml-pdp | [2025-06-22T18:34:16.288+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.resource.AffinityPolicy:1.0.0 policy-xacml-pdp | [2025-06-22T18:34:16.288+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Retrieving datatype policy.data.affinityProperties_properties policy-xacml-pdp | [2025-06-22T18:34:16.288+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] 
Scanning PolicyType onap.policies.optimization.Resource:1.0.0 policy-xacml-pdp | [2025-06-22T18:34:16.289+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.Optimization:1.0.0 policy-xacml-pdp | [2025-06-22T18:34:16.289+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Found root - done scanning policy-xacml-pdp | [2025-06-22T18:34:16.289+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: OSDF_CASABLANCA.Affinity_Default type: onap.policies.optimization.resource.AffinityPolicy weight: 0 policy: policy-xacml-pdp | {"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}} policy-xacml-pdp | [2025-06-22T18:34:16.311+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] policy-xacml-pdp | [XACML policy XML not preserved by the console capture (markup stripped); the surviving fragments identify policy OSDF_CASABLANCA.Affinity_Default, rule description "Default is to PERMIT if the policy matches.", a match condition ("IF exists and is equal" / "Does the policy-type attribute exist?" / "Get the size of policy-type attributes" / "Is this policy-type in the list?") testing for policy type onap.policies.optimization.resource.AffinityPolicy, and an embedded copy of the TOSCA policy JSON shown above] policy-xacml-pdp | [2025-06-22T18:34:16.332+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is policy-xacml-pdp | [same XACML policy XML elided again by the console capture; the surviving fragments match the translator output above] policy-xacml-pdp | [2025-06-22T18:34:16.332+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/optimization/xacml.properties policy-xacml-pdp | [2025-06-22T18:34:16.333+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0} into application optimization policy-xacml-pdp | [2025-06-22T18:34:16.333+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"106d6c6c-ee16-4b07-a6ff-30947cd454e4","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"1862f0b6-e099-42b3-aa99-47fa0b0ba3dc","timestampMs":1750617256333,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:34:16.346+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp |
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"106d6c6c-ee16-4b07-a6ff-30947cd454e4","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"1862f0b6-e099-42b3-aa99-47fa0b0ba3dc","timestampMs":1750617256333,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:34:16.347+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-xacml-pdp | [2025-06-22T18:34:35.575+00:00|INFO|RequestLog|qtp2014233765-31] 172.17.0.2 - policyadmin [22/Jun/2025:18:34:35 +0000] "GET /metrics HTTP/1.1" 200 2174 "" "Prometheus/3.4.1" policy-xacml-pdp | [2025-06-22T18:34:39.953+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) policy-xacml-pdp | [2025-06-22T18:34:39.954+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:policy-type policy-xacml-pdp | [2025-06-22T18:34:39.954+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-22T18:34:39.954+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-22T18:34:39.954+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-22T18:34:39.956+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Loading policy file /opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml policy-xacml-pdp | [2025-06-22T18:34:39.978+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Root Policies: 1 policy-xacml-pdp | [2025-06-22T18:34:39.978+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Referenced Policies: 0 policy-xacml-pdp | [2025-06-22T18:34:39.978+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating policy map with policy 34bd182e-3af0-45f2-8f07-efecbf67926f version 1.0 policy-xacml-pdp | 
[2025-06-22T18:34:39.978+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating policy map with policy onap.restart.tca version 1.0.0 policy-xacml-pdp | [2025-06-22T18:34:39.997+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-30] Elapsed Time: 43ms policy-xacml-pdp | [2025-06-22T18:34:39.997+00:00|INFO|StdBaseTranslator|qtp2014233765-30] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=34bd182e-3af0-45f2-8f07-efecbf67926f,version=1.0}]}]} policy-xacml-pdp | [2025-06-22T18:34:39.997+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Obligation: 
urn:org:onap:rest:body policy-xacml-pdp | [2025-06-22T18:34:39.997+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-22T18:34:39.998+00:00|INFO|MonitoringPdpApplication|qtp2014233765-30] Abbreviating decision results DecisionResponse(status=null, message=null, advice=null, obligations=null, policies={onap.restart.tca={type=onap.policies.monitoring.tcagen2, type_version=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}}, name=onap.restart.tca, version=1.0.0, metadata={policy-id=onap.restart.tca, policy-version=1.0.0}}}, attributes=null) policy-xacml-pdp | [2025-06-22T18:34:40.001+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.6 - policyadmin [22/Jun/2025:18:34:39 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 146 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-22T18:34:40.013+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) policy-xacml-pdp | [2025-06-22T18:34:40.013+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:org:onap:policy-type policy-xacml-pdp | [2025-06-22T18:34:40.015+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-30] Elapsed Time: 2ms policy-xacml-pdp | [2025-06-22T18:34:40.015+00:00|INFO|StdBaseTranslator|qtp2014233765-30] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=34bd182e-3af0-45f2-8f07-efecbf67926f,version=1.0}]}]} policy-xacml-pdp | [2025-06-22T18:34:40.015+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-22T18:34:40.015+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-22T18:34:40.015+00:00|INFO|MonitoringPdpApplication|qtp2014233765-30] Unsupported query param 
for Monitoring application: {null=[]} policy-xacml-pdp | [2025-06-22T18:34:40.018+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.6 - policyadmin [22/Jun/2025:18:34:40 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1055 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-22T18:34:40.032+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-32] Converting Request DecisionRequest(onapName=SDNC, onapComponent=SDNC-component, onapInstance=SDNC-component-instance, requestId=unique-request-sdnc-1, context=null, action=naming, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={nfRole=[], naming-type=[], property-name=[], policy-type=[onap.policies.Naming]}) policy-xacml-pdp | [2025-06-22T18:34:40.033+00:00|WARN|RequestParser|qtp2014233765-32] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:resource:resource-id policy-xacml-pdp | [2025-06-22T18:34:40.033+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-22T18:34:40.033+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-22T18:34:40.033+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-22T18:34:40.034+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Loading policy file /opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml policy-xacml-pdp | [2025-06-22T18:34:40.041+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Root Policies: 1 policy-xacml-pdp | [2025-06-22T18:34:40.041+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-32] Referenced Policies: 0 policy-xacml-pdp | [2025-06-22T18:34:40.041+00:00|INFO|StdPolicyFinder|qtp2014233765-32] Updating policy map with policy e2a7b620-8e57-4e6f-8607-c3485832ead0 version 1.0 policy-xacml-pdp | [2025-06-22T18:34:40.041+00:00|INFO|StdPolicyFinder|qtp2014233765-32] Updating policy map with policy SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP version 1.0.0 policy-xacml-pdp | [2025-06-22T18:34:40.043+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-32] Elapsed Time: 10ms policy-xacml-pdp | [2025-06-22T18:34:40.043+00:00|INFO|StdBaseTranslator|qtp2014233765-32] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component-instance}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:policy-type,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}],includeInResults=true}]}],policyIdentifiers=[{id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP,version=1.0.0}],policySetIdentifiers=[{id=e2a7b620-8e57-4e6f-8607-c3485832ead0,versi
on=1.0}]}]} policy-xacml-pdp | [2025-06-22T18:34:40.043+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-32] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-22T18:34:40.044+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-32] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator policy-xacml-pdp | [2025-06-22T18:34:40.046+00:00|INFO|RequestLog|qtp2014233765-32] 172.17.0.6 - policyadmin [22/Jun/2025:18:34:40 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1598 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-22T18:34:40.061+00:00|INFO|StdMatchableTranslator|qtp2014233765-27] Converting Request DecisionRequest(onapName=OOF, onapComponent=OOF-component, onapInstance=OOF-component-instance, requestId=null, context={subscriberName=[]}, action=optimize, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={scope=[], services=[], resources=[], geography=[]}) policy-xacml-pdp | [2025-06-22T18:34:40.064+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | [2025-06-22T18:34:40.064+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Initializing OnapPolicyFinderFactory Properties policy-xacml-pdp | [2025-06-22T18:34:40.064+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides policy-xacml-pdp | [2025-06-22T18:34:40.064+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Loading policy file /opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml policy-xacml-pdp | [2025-06-22T18:34:40.070+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Root Policies: 1 policy-xacml-pdp | [2025-06-22T18:34:40.070+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Referenced Policies: 0 policy-xacml-pdp | [2025-06-22T18:34:40.070+00:00|INFO|StdPolicyFinder|qtp2014233765-27] Updating policy map with policy d671b36a-6a76-44f7-8a67-49be874da3bb version 1.0 policy-xacml-pdp | [2025-06-22T18:34:40.070+00:00|INFO|StdPolicyFinder|qtp2014233765-27] Updating policy map with policy OSDF_CASABLANCA.Affinity_Default version 1.0.0 policy-xacml-pdp | [2025-06-22T18:34:40.071+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-27] Elapsed Time: 8ms policy-xacml-pdp | [2025-06-22T18:34:40.071+00:00|INFO|StdBaseTranslator|qtp2014233765-27] Converting Response 
{results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OSDF_CASABLANCA.Affinity_Default}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:weight,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#integer,value=0}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.optimization.resource.AffinityPolicy}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component-instance}],includeInResults=true}]}],policyIdentifiers=[{id=OSDF_CASABLANCA.Affinity_Default,version=1.0.0}],policySetIdentifiers=[{id=d671b36a-6a76-44f7-8a67-49be874da3bb,version=1.0}]}]} policy-xacml-pdp | [2025-06-22T18:34:40.071+00:00|INFO|StdMatchableTranslator|qtp2014233765-27] Obligation: urn:org:onap:rest:body policy-xacml-pdp | [2025-06-22T18:34:40.072+00:00|INFO|StdMatchableTranslator|qtp2014233765-27] New entry onap.policies.optimization.resource.AffinityPolicy weight 0 policy-xacml-pdp | [2025-06-22T18:34:40.072+00:00|INFO|StdMatchableTranslator|qtp2014233765-27] Policy (OSDF_CASABLANCA.Affinity_Default,{type=onap.policies.optimization.resource.AffinityPolicy, type_version=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}, name=OSDF_CASABLANCA.Affinity_Default, version=1.0.0, metadata={policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0}}) policy-xacml-pdp | [2025-06-22T18:34:40.073+00:00|INFO|RequestLog|qtp2014233765-27] 172.17.0.6 - policyadmin [22/Jun/2025:18:34:40 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 467 "" "python-requests/2.32.4" policy-xacml-pdp | [2025-06-22T18:34:40.518+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-xacml-pdp | 
{"source":"pap-96c746df-a679-451e-9b08-409400ed4a9d","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"8b3804f0-7cd2-4401-bfcd-8ad64b80a908","timestampMs":1750617280478,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} policy-xacml-pdp | [2025-06-22T18:34:40.519+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=8b3804f0-7cd2-4401-bfcd-8ad64b80a908, timestampMs=1750617280478, name=xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-96c746df-a679-451e-9b08-409400ed4a9d, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[], policiesToBeUndeployed=[onap.restart.tca 1.0.0]) policy-xacml-pdp | [2025-06-22T18:34:40.519+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-22T18:34:40.519+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 policy-xacml-pdp | [2025-06-22T18:34:40.519+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 policy-xacml-pdp | [2025-06-22T18:34:40.519+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-22T18:34:40.519+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 policy-xacml-pdp | [2025-06-22T18:34:40.520+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties policy-xacml-pdp | [2025-06-22T18:34:40.520+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Unloaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} from application monitoring policy-xacml-pdp | 
[2025-06-22T18:34:40.521+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"8b3804f0-7cd2-4401-bfcd-8ad64b80a908","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"09f01281-c47c-4f94-a9b2-2dbf68381bf3","timestampMs":1750617280521,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
policy-xacml-pdp | [2025-06-22T18:34:40.526+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"8b3804f0-7cd2-4401-bfcd-8ad64b80a908","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"09f01281-c47c-4f94-a9b2-2dbf68381bf3","timestampMs":1750617280521,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
policy-xacml-pdp | [2025-06-22T18:34:40.527+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-xacml-pdp | [2025-06-22T18:35:06.483+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=acf694fd-0c8d-4f9b-a2af-9e78ff422ced, timestampMs=1750617306483, name=xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=ACTIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0, OSDF_CASABLANCA.Affinity_Default 1.0.0], deploymentInstanceInfo=null, properties=null, response=null)
policy-xacml-pdp | [2025-06-22T18:35:06.483+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap]
policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"acf694fd-0c8d-4f9b-a2af-9e78ff422ced","timestampMs":1750617306483,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
policy-xacml-pdp | [2025-06-22T18:35:06.494+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"acf694fd-0c8d-4f9b-a2af-9e78ff422ced","timestampMs":1750617306483,"name":"xacml-c89e0ec5-b8cd-4bb4-86af-4db8164bc8bf","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"}
policy-xacml-pdp | [2025-06-22T18:35:06.495+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-xacml-pdp | [2025-06-22T18:35:35.574+00:00|INFO|RequestLog|qtp2014233765-31] 172.17.0.2 - policyadmin [22/Jun/2025:18:35:35 +0000] "GET /metrics HTTP/1.1" 200 2217 "" "Prometheus/3.4.1"
postgres | The files belonging to this database system will be owned by user "postgres".
postgres | This user must also own the server process.
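
For reference, the RequestLog entries above record the CSIT suite driving these decisions with python-requests against POST /policy/pdpx/v1/decision. A minimal sketch of the SDNC naming call (the DecisionRequest dumped earlier in this log) might look like the following; the endpoint host/port, the credentials, and the exact JSON key casing are assumptions inferred from the log, not taken from the CSIT sources.

import requests

# Sketch of the decision call seen in the RequestLog entries above.
# Field values come from the DecisionRequest toString in the log;
# key casing follows ONAP Decision API convention (assumption).
decision = {
    "ONAPName": "SDNC",
    "ONAPComponent": "SDNC-component",
    "ONAPInstance": "SDNC-component-instance",
    "requestId": "unique-request-sdnc-1",
    "action": "naming",
    "resource": {
        "nfRole": [],
        "naming-type": [],
        "property-name": [],
        "policy-type": ["onap.policies.Naming"],
    },
}
resp = requests.post(
    "https://policy-xacml-pdp:6969/policy/pdpx/v1/decision",  # assumed address
    json=decision,
    auth=("policyadmin", "<password>"),  # placeholder credentials
    verify=False,  # CSIT containers typically use self-signed certs (assumption)
)
resp.raise_for_status()
print(resp.json())  # expect a Permit carrying an urn:org:onap:rest:body obligation
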
postgres |
postgres | The database cluster will be initialized with locale "en_US.utf8".
postgres | The default database encoding has accordingly been set to "UTF8".
postgres | The default text search configuration will be set to "english".
postgres |
postgres | Data page checksums are disabled.
postgres |
postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres | creating subdirectories ... ok
postgres | selecting dynamic shared memory implementation ... posix
postgres | selecting default max_connections ... 100
postgres | selecting default shared_buffers ... 128MB
postgres | selecting default time zone ... Etc/UTC
postgres | creating configuration files ... ok
postgres | running bootstrap script ... ok
postgres | performing post-bootstrap initialization ... ok
postgres | syncing data to disk ... ok
postgres |
postgres |
postgres | Success. You can now start the database server using:
postgres |
postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres |
postgres | initdb: warning: enabling "trust" authentication for local connections
postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
postgres | waiting for server to start....2025-06-22 18:32:21.223 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres | 2025-06-22 18:32:21.226 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2025-06-22 18:32:21.231 UTC [51] LOG: database system was shut down at 2025-06-22 18:32:20 UTC
postgres | 2025-06-22 18:32:21.235 UTC [48] LOG: database system is ready to accept connections
postgres | done
postgres | server started
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh
postgres | #!/bin/bash -xv
postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved
postgres | #
postgres | # Licensed under the Apache License, Version 2.0 (the "License");
postgres | # you may not use this file except in compliance with the License.
postgres | # You may obtain a copy of the License at
postgres | #
postgres | # http://www.apache.org/licenses/LICENSE-2.0
postgres | #
postgres | # Unless required by applicable law or agreed to in writing, software
postgres | # distributed under the License is distributed on an "AS IS" BASIS,
postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
postgres | # See the License for the specific language governing permissions and
postgres | # limitations under the License.
postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | 2025-06-22 18:32:22.670 UTC [48] LOG: received fast shutdown request postgres 
| waiting for server to shut down....2025-06-22 18:32:22.672 UTC [48] LOG: aborting any active transactions postgres | 2025-06-22 18:32:22.674 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1 postgres | 2025-06-22 18:32:22.677 UTC [49] LOG: shutting down postgres | 2025-06-22 18:32:22.678 UTC [49] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-22 18:32:23.462 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.695 s, sync=0.080 s, total=0.785 s; sync files=1788, longest=0.009 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-22 18:32:23.474 UTC [48] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-22 18:32:23.601 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-22 18:32:23.602 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-22 18:32:23.602 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-22 18:32:23.605 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-22 18:32:23.629 UTC [101] LOG: database system was shut down at 2025-06-22 18:32:23 UTC postgres | 2025-06-22 18:32:23.635 UTC [1] LOG: database system is ready to accept connections prometheus | time=2025-06-22T18:32:25.236Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-22T18:32:25.236Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-22T18:32:25.236Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-22T18:32:25.237Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-22T18:32:25.241Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-22T18:32:25.242Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-22T18:32:25.244Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-22T18:32:25.244Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-22T18:32:25.248Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-22T18:32:25.248Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1µs prometheus | time=2025-06-22T18:32:25.248Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-22T18:32:25.249Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=618.225µs prometheus | time=2025-06-22T18:32:25.249Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=44.471µs wal_replay_duration=648.915µs wbl_replay_duration=180ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1µs total_replay_duration=818.907µs prometheus | time=2025-06-22T18:32:25.252Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-22T18:32:25.252Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-22T18:32:25.252Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-22T18:32:25.254Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-22T18:32:25.254Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.31µs remote_storage=2.19µs web_handler=640ns query_engine=1.88µs scrape=280.042µs scrape_sd=503.434µs notify=152.912µs notify_sd=13.93µs rules=2.2µs tracing=4.45µs filename=/etc/prometheus/prometheus.yml totalDuration=1.667795ms prometheus | time=2025-06-22T18:32:25.254Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-22T18:32:25.254Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-22 18:32:21,360] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-22 18:32:21,362] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-22 18:32:21,362] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-22 18:32:21,362] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-22 18:32:21,362] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-22 18:32:21,364] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-22 18:32:21,364] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-22 18:32:21,364] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-22 18:32:21,364] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-22 18:32:21,365] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-22 18:32:21,365] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-22 18:32:21,366] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-22 18:32:21,366] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-22 18:32:21,366] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-22 18:32:21,366] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-22 18:32:21,367] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-22 18:32:21,377] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-22 18:32:21,379] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-22 18:32:21,379] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-22 18:32:21,381] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-22 18:32:21,389] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,389] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,389] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,389] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,389] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,389] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,389] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,389] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,389] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,389] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,390] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,390] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,390] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,390] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-22 18:32:21,390] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,390] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,391] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,392] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,392] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-22 18:32:21,393] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,393] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,394] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-22 18:32:21,394] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-22 18:32:21,395] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-22 18:32:21,395] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-22 18:32:21,395] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-22 18:32:21,395] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-22 18:32:21,395] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-22 18:32:21,395] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-22 18:32:21,397] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,397] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,398] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-22 18:32:21,398] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-22 18:32:21,398] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,418] INFO Logging initialized @421ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-22 18:32:21,473] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-22 18:32:21,473] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-22 18:32:21,489] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-22 18:32:21,521] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-22 18:32:21,521] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-22 18:32:21,522] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-22 18:32:21,525] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-22 18:32:21,534] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-22 18:32:21,545] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-22 18:32:21,545] INFO Started @552ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-22 18:32:21,545] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-22 18:32:21,548] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-22 18:32:21,549] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-22 18:32:21,550] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-22 18:32:21,551] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-22 18:32:21,564] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-22 18:32:21,564] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-22 18:32:21,565] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-22 18:32:21,565] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-22 18:32:21,570] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-22 18:32:21,570] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-22 18:32:21,577] INFO Snapshot loaded in 12 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-22 18:32:21,578] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-22 18:32:21,579] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-22 18:32:21,589] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-22 18:32:21,591] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-22 18:32:21,606] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-22 18:32:21,607] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-22 18:32:22,640] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
Container policy-xacml-pdp Stopping
Container policy-csit Stopping
Container grafana Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-xacml-pdp Stopped
Container policy-xacml-pdp Removing
Container policy-xacml-pdp Removed
Container policy-pap Stopping
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container policy-api Stopping
Container kafka Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container postgres Stopping
Container postgres Stopped
Container postgres Removing
Container postgres Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2036 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins1259286750875560867.sh
---> sysstat.sh
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins14676192592981100895.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp ']'
+ mkdir -p /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/archives/
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins18388936024120292914.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-0TrN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-0TrN/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins16281991016059697097.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp@tmp/config1928496909571023043tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins3704421548009560304.sh
---> create-netrc.sh
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins2688124968277317590.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-0TrN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-0TrN/bin to PATH
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins7728267531564305939.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash /tmp/jenkins4689496677228159654.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-0TrN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-0TrN/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-xacml-pdp-master-project-csit-xacml-pdp] $ /bin/bash -l /tmp/jenkins4235753122405872333.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-0TrN from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-0TrN/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-xacml-pdp/2018
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
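The logs-deploy step above reports its Nexus URL, path, and archive pattern. A hedged sketch of how that upload could be reproduced with the lftools CLI the venv installed; the exact flags are an assumption based on lftools' deploy subcommands, not a copy of the job's script:

#!/bin/bash
# Upload workspace files matching the pattern printed in the log to Nexus.
set -eu
NEXUS_URL=https://nexus.onap.org
NEXUS_PATH=production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-xacml-pdp/2018
WORKSPACE=/w/workspace/policy-xacml-pdp-master-project-csit-xacml-pdp

lftools deploy archives -p '**/target/surefire-reports/*-output.txt' \
    "$NEXUS_URL" "$NEXUS_PATH" "$WORKSPACE"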
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-23044 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   15G  141G  10% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         893       24278           0        6995       30818
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:31:b0:90 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.29/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85989sec preferred_lft 85989sec
    inet6 fe80::f816:3eff:fe31:b090/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:18:75:36:67 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:18ff:fe75:3667/64 scope link
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-23044)  06/22/25  _x86_64_  (8 CPU)

18:30:10     LINUX RESTART  (8 CPU)

18:31:01          tps      rtps      wtps   bread/s   bwrtn/s
18:32:01       300.50     19.89    280.61   2297.92 162930.19
18:33:01       553.44      4.17    549.28    415.13 145919.95
18:34:01       179.57      0.10    179.47     10.80  27264.92
18:35:01        50.19      0.25     49.94     20.13   6973.50
18:36:01        20.20      0.08     20.11     11.46    399.00
Average:       220.83      4.91    215.92    552.14  68754.01

18:31:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
18:32:01     25336984  31593952   7602236     23.08    135996   6239316   1936324      5.70   1052812   6015548   1956608
18:33:01     23324608  29757004   9614612     29.19    159480   6377928   8209408     24.15   3112572   5867276      2444
18:34:01     22526616  29798612  10412604     31.61    193768   7112308   8083872     23.78   3196852   6503788    117988
18:35:01     22636600  29586968  10302620     31.28    195572   6807840   8375428     24.64   3388416   6217404       180
18:36:01     23135972  30043144   9803248     29.76    195908   6770956   6591580     19.39   2955044   6172024       312
Average:     23392156  30155936   9547064     28.98    176145   6661670   6639322     19.53   2741139   6155208    415506

18:31:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
18:32:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
18:32:01         ens3   1280.35    752.67  32780.15     63.30      0.00      0.00      0.00      0.00
18:32:01           lo     13.29     13.29      1.23      1.23      0.00      0.00      0.00      0.00
18:33:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
18:33:01 br-5e1d5457ca43     18.88     24.83      1.24    307.00      0.00      0.00      0.00      0.00
18:33:01  veth5a340ae      5.35      4.58      0.67      0.57      0.00      0.00      0.00      0.00
18:33:01  veth436b8e9     91.78     91.53     15.60     18.61      0.00      0.00      0.00      0.00
18:34:01      docker0    126.25    173.45      8.09   1348.07      0.00      0.00      0.00      0.00
18:34:01 br-5e1d5457ca43      0.37      0.40      0.03      0.03      0.00      0.00      0.00      0.00
18:34:01  veth5a340ae    139.43    141.58     16.45     34.07      0.00      0.00      0.00      0.00
18:34:01  veth436b8e9      0.13      0.18      0.43      0.02      0.00      0.00      0.00      0.00
18:35:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
18:35:01 br-5e1d5457ca43      0.45      0.20      0.02      0.01      0.00      0.00      0.00      0.00
18:35:01  veth5a340ae    381.82    383.59     41.60     72.22      0.00      0.00      0.00      0.01
18:35:01  veth436b8e9    130.14    129.81     14.96     27.83      0.00      0.00      0.00      0.00
18:36:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
18:36:01 br-5e1d5457ca43      0.02      0.00      0.00      0.00      0.00      0.00      0.00      0.00
18:36:01  veth5a340ae    113.01    115.83     12.72     20.98      0.00      0.00      0.00      0.00
18:36:01  veth436b8e9      0.35      0.57      0.58      0.04      0.00      0.00      0.00      0.00
Average:      docker0     25.23     34.67      1.62    269.45      0.00      0.00      0.00      0.00
Average: br-5e1d5457ca43      3.94      5.08      0.26     61.37      0.00      0.00      0.00      0.00
Average:  veth5a340ae    127.85    129.04     14.28     25.55      0.00      0.00      0.00      0.00
Average:  veth436b8e9     44.46     44.39      6.31      9.30      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-23044)  06/22/25  _x86_64_  (8 CPU)

18:30:10     LINUX RESTART  (8 CPU)

18:31:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
18:32:01        all     18.10      0.00      5.98      4.14      0.05     71.73
18:32:01          0     15.92      0.00      6.49      1.31      0.05     76.23
18:32:01          1     13.18      0.00      6.08      1.04      0.05     79.64
18:32:01          2     24.54      0.00      6.03      3.60      0.05     65.79
18:32:01          3     12.54      0.00      5.35      0.77      0.03     81.31
18:32:01          4     11.20      0.00      5.86      2.58      0.05     80.31
18:32:01          5     18.84      0.00      5.57      1.66      0.05     73.88
18:32:01          6     11.18      0.00      5.45     16.72      0.05     66.60
18:32:01          7     37.38      0.00      6.94      5.53      0.10     50.06
18:33:01        all     28.96      0.00      4.89      4.88      0.08     61.19
18:33:01          0     31.83      0.00      5.01      1.46      0.08     61.61
18:33:01          1     27.06      0.00      4.60      0.97      0.07     67.30
18:33:01          2     31.20      0.00      4.73      1.06      0.07     62.94
18:33:01          3     30.50      0.00      4.68      5.05      0.08     59.69
18:33:01          4     34.94      0.00      5.39      5.98      0.08     53.60
18:33:01          5     18.28      0.00      3.75      5.15      0.07     72.75
18:33:01          6     27.76      0.00      6.56     17.18      0.10     48.40
18:33:01          7     30.09      0.00      4.45      2.25      0.08     63.13
18:34:01        all      9.05      0.00      1.93      1.10      0.07     87.86
18:34:01          0      9.60      0.00      1.57      2.07      0.07     86.69
18:34:01          1     10.31      0.00      2.40      0.32      0.07     86.91
18:34:01          2     12.89      0.00      2.18      0.30      0.08     84.54
18:34:01          3     10.18      0.00      2.03      0.49      0.07     87.24
18:34:01          4      6.37      0.00      2.05      1.60      0.07     89.92
18:34:01          5      5.75      0.00      1.32      1.96      0.07     90.90
18:34:01          6     10.07      0.00      2.10      1.60      0.07     86.17
18:34:01          7      7.20      0.00      1.77      0.51      0.07     90.46
18:35:01        all      6.98      0.00      1.36      0.25      0.05     91.35
18:35:01          0      6.23      0.00      2.23      0.03      0.05     91.46
18:35:01          1      6.71      0.00      1.56      0.07      0.03     91.64
18:35:01          2      7.56      0.00      0.95      0.00      0.05     91.44
18:35:01          3      5.98      0.00      1.22      0.22      0.03     92.55
18:35:01          4      6.35      0.00      1.25      0.60      0.05     91.74
18:35:01          5      9.37      0.00      1.34      1.09      0.07     88.14
18:35:01          6      6.68      0.00      1.02      0.03      0.03     92.23
18:35:01          7      6.95      0.00      1.28      0.00      0.07     91.71
18:36:01        all      1.56      0.00      0.52      0.05      0.05     97.81
18:36:01          0      1.40      0.00      0.48      0.00      0.07     98.05
18:36:01          1      1.15      0.00      0.50      0.17      0.05     98.13
18:36:01          2      2.00      0.00      0.65      0.07      0.07     97.21
18:36:01          3      1.60      0.00      0.38      0.02      0.05     97.95
18:36:01          4      1.68      0.00      0.83      0.08      0.07     97.33
18:36:01          5      1.27      0.00      0.33      0.02      0.07     98.31
18:36:01          6      1.92      0.00      0.57      0.00      0.03     97.48
18:36:01          7      1.46      0.00      0.49      0.00      0.05     98.01
Average:        all     12.91      0.00      2.93      2.08      0.06     82.02
Average:          0     12.97      0.00      3.15      0.97      0.06     82.84
Average:          1     11.67      0.00      3.02      0.51      0.05     84.75
Average:          2     15.62      0.00      2.91      1.00      0.06     80.41
Average:          3     12.15      0.00      2.73      1.31      0.05     83.76
Average:          4     12.08      0.00      3.07      2.16      0.06     82.61
Average:          5     10.69      0.00      2.46      1.97      0.06     84.82
Average:          6     11.49      0.00      3.13      7.07      0.06     78.25
Average:          7     16.60      0.00      2.98      1.66      0.07     78.68
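The host report above interleaves uname/lscpu/nproc/df/free/ip output with the sar reports (block I/O, memory, per-interface traffic, then per-CPU usage). A hedged sketch of a collector that would emit the same sections, assuming sysstat's background sampling is already writing today's data file and that the output path under WORKSPACE/archives is hypothetical:

#!/bin/bash
# Gather the same host snapshot the job prints, section by section.
set -eu
{
    echo "---> uname -a:";  uname -a
    echo "---> lscpu:";     lscpu
    echo "---> nproc:";     nproc
    echo "---> df -h:";     df -h
    echo "---> free -m:";   free -m
    echo "---> ip addr:";   ip addr
    echo "---> sar -b -r -n DEV:"; sar -b -r -n DEV   # I/O, memory, NIC stats
    echo "---> sar -P ALL:";       sar -P ALL         # per-CPU utilization
} > "${WORKSPACE:-.}/archives/host-info.txt"          # hypothetical output file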